Making Governance Work

 

In my most recent articles on Approaching Artificial Intelligence and Transformation, I highlight the importance of discipline in achieving business outcomes.  To that end, governance is a critical aspect of any large-scale transformation or delivery effort because it serves both to reduce risk and to inform ongoing change, both of which are inevitable realities of these kinds of programs.

The purpose of this article is to discuss ways to approach governance overall, to avoid common concerns, and to establish core elements that will increase the probability it will be successful.  Having seen and established many PMOs and governance bodies over time, I can honestly say that they are difficult to put in place as much for intangible reasons as for mechanical ones, and I hope to address both below.

 

Have the Right Mindset

Before addressing the execution “dos” and “don’ts”, success starts with understanding that governance is about successful delivery, not pure oversight.  Where delivery is the priority, the focus is typically on enablement and support.  By contrast, where oversight is the priority, emphasis tends to be placed largely on controls and intervention.  The reality is that both are needed, which will be discussed more below, but starting with an intention to help delivery teams generally translates into a positive and supportive environment where collaboration is encouraged.  If, by comparison, the role of governance is relegated to finding “gotchas” and looking for issues without providing teams with guidance or solutions, the effort likely won’t succeed.  Healthy relationships and trust are critical to effective governance because they encourage transparent and open dialogue.  Without that, the process will likely break down or be ineffective somewhere along the way.

In a perfect world, delivery teams should want to participate in a governance process because it helps them do their work.

 

Addressing the Challenges

Suggesting that you want to initiate a governance process can be a very uncomfortable conversation.  When a consultant proposes it, clients can feel like it is something being done “to” them, with a third party reporting on their work to management.  When a colleague proposes it, it can feel like someone is trying to exercise a level of control over their peers in a leadership team and, consequently, limiting individual autonomy and empowerment in some way.  This is why relationships and trust are critically important.  Governance is a partnership, and it is about increasing the probability of successful outcomes, not adding a layer of management over people who are capable of doing their jobs with the right level of support.

That being said, three objections typically surface when the idea of establishing governance is introduced: that it will slow things down, that it will hinder value creation, and that it will add unnecessary overhead to teams that are already “too busy” or rushing to a deadline.  I’ll address each of these in turn, along with what can be done about the concern in how you approach things.

 

It Slows Things Down

As I wrote in my article on Excellence by Design, delivering at speed matters.  Lack of oversight can let efforts go off the rails without the timely interventions and support that prevent delays and budget overruns.  That being said, if the process slows everything down, you aren’t necessarily helping teams deliver either.

A fundamental question is whether your governance process is meant to be a “gate” or a “checkpoint”.

Gates can be very disruptive, so there should be compliance or risk-driven concerns (e.g., security or data privacy) that necessitate stopping or delaying some or all of a project until certain defined criteria or standards are met.  If a process is gated, then this should be factored into estimation and planning at the outset, so expectations are set and managed accordingly and the “we don’t have time for this” discussion that otherwise could happen is avoided.  Gating criteria and project debriefs / retrospectives should also be reviewed to ensure standards and guidelines are updated to both mitigate risk and encourage accelerated delivery, which is a difficult balance to strike.  In principle, the more disciplined an environment is, the less “gating” should be needed, because teams are already following standards, doing proper quality assurance, and so on, and risk management should be easier on an average effort.

When it comes to “checkpoints”, there should be no difference in the level of standards and guidelines in place; the difference is in how they are handled in the course of the review discussion itself.  When critical criteria are missed in a gate, the approach is “pause and adjust”, whereas a checkpoint would note the exception and the requested remedy, ideally along with a timeframe for addressing it.  The team is allowed to continue forward, but with an explicit assumption that they will make adjustments so the overall solution integrity is maintained in line with expectations.  This is where a significant amount of technical debt and delivery issues can be created.  There is a level of trust involved in a checkpoint process, because the delivery team may choose not to remediate the issues, in which case the purpose and value of standards are undermined and a significant amount of complexity and risk is introduced as a result.  If this becomes a pattern over time, it may make sense to shift towards a more gated process, particularly if security, privacy, or other critical issues are being created.

Again, the goal of governance is to remove barriers, provide resources where required, and enable successful delivery, but there is a handshake involved to the degree that the integrity of the process needs to be managed overall.  My general point of view is to trust teams to do the right thing and to leverage a checkpoint versus a gated process, but that is predicated on ensuring standards and quality are maintained.  If delivery discipline isn’t where it needs to be, a stronger process may be appropriate.

 

It Erodes Value

Where the process is perceived to be pure overhead, it is important to clarify its overall goals and, to the extent possible, to identify some metrics that can be used to signal whether it is effective in helping to promote a healthy delivery environment.

At an overall level, the process is about reducing risk, promoting speed and enablement, and increasing the probability of successful delivery.  Whether that is measured in changes in budget and schedule variance, issues remediated pre-deployment, or by a downstream measure of business value created through initiatives delivered on time, there should be a clear understanding of what the desired outcomes are and a sanity check that they are being met.

Arguably, where standards are concerned, this can be difficult to evaluate and measure, but the technical debt that accumulates in an environment lacking standards and governance, the cost of operations, and the percentage of effort directed at build versus run can certainly be monitored and evaluated at an overall level.
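
As a hedged illustration of the kinds of measures mentioned, a sketch like the following could be used to track a few simple delivery-health metrics; the function names and figures are assumptions for demonstration, not prescribed measures.

```python
# Illustrative only: a few simple delivery-health metrics a governance
# process might track over time.

def budget_variance(actual_cost: float, planned_cost: float) -> float:
    """Positive values indicate overspend relative to plan."""
    return (actual_cost - planned_cost) / planned_cost

def schedule_variance_days(actual_days: int, planned_days: int) -> int:
    """Positive values indicate the effort is running long."""
    return actual_days - planned_days

def build_vs_run_ratio(build_hours: float, run_hours: float) -> float:
    """Share of total effort going to new capability versus keeping the lights on."""
    total = build_hours + run_hours
    return build_hours / total if total else 0.0

# Example: a project 10% over budget, two weeks late, with 40% of effort on "run"
print(budget_variance(1_100_000, 1_000_000))   # 0.10
print(schedule_variance_days(194, 180))        # 14
print(build_vs_run_ratio(600, 400))            # 0.6
```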

 

It Adds Overhead

I remember taking an assignment to help clean up the governance of a delivery environment many years ago where the person leading the organization was receiving a stack of updates every week that was literally three feet of documents when printed, spanning hundreds of projects.  It goes without saying that all of that reporting provided nothing actionable, beyond everyone being able to say that they were “reporting out” on their delivery efforts on an ongoing basis.  The amount of time project and program managers spent updating all that documentation was also substantial.  This is not governance.  This is administration and a waste of resources.  Ultimately, by changing the structure of the process, defining standards, and right-sizing the level of information being reported, the outcome was a five-page summary covering critical programs, ongoing maintenance, production, and key metrics that was produced with considerably less effort and provided much better transparency into the environment.

The goal of governance is to provide support, not to produce reams of documentation.  Ideally, there should be a critical minimum amount of information requested from teams to support a discussion on what they are doing, where they are in the delivery process, the risks or challenges they are facing, and what help (if any) they may need.  To the degree that you can leverage artifacts the team is already producing, so there is little to no extra effort involved in preparing for a discussion, even better.  And, as another litmus test, everything included in a governance discussion should serve a purpose and be actionable.  Anything else is likely a waste of time and resources.
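
For illustration only, a “critical minimum” checkpoint summary could be as simple as the following sketch; the fields echo the questions above and the names are hypothetical, not a prescribed template.

```python
# Hypothetical sketch of the minimum a team might bring to a checkpoint discussion.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CheckpointSummary:
    initiative: str            # what the team is doing
    phase: str                 # where they are in the delivery process
    status: str                # e.g., "on track", "at risk"
    risks: List[str] = field(default_factory=list)        # risks or challenges faced
    help_needed: List[str] = field(default_factory=list)  # support being requested
    follow_ups: List[str] = field(default_factory=list)   # agreed remediations and timeframes

summary = CheckpointSummary(
    initiative="Customer portal re-platform",
    phase="Build / Sprint 6 of 10",
    status="at risk",
    risks=["Data migration dependency slipping"],
    help_needed=["Decision on interim integration approach"],
)
```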

 

Making Governance Effective

With some of the common concerns and issues addressed, there are also practices worth considering that increase the probability of success.

 

Allow for Evolution

As I mentioned in the opening, the right mindset has a significant influence on making governance successful.  Part of that is understanding it will never be perfect.  I believe very strongly in launching governance discussions and allowing feedback and time to mature the process and infrastructure given real experience with what works and what everyone needs.

One of the best things that can be done is to track and monitor delivery risks and technology-related issues and use those inputs to guide and prioritize the standards and guidelines in place.  Said differently, you don’t need governance to improve things you already do well; you leverage it (primarily) to help you address the risks and gaps you have and to promote quality.

Having seen an environment where a team was “working on” establishing a governance process over an extended period of time versus one where the process was stood up inside 30 days, I’d much rather launch quickly and allow the process to evolve than wait for a perfect one that never launches.

 

Cover the Bases

In the previous section, I mentioned leveraging a critical minimum amount of information to facilitate the process, ideally utilizing artifacts a team already has.  Again, it’s not about the process, it’s about the discussion and enabling outcomes.

That being said, since trust and partnership are important, even in a fairly bare bones governance environment, there should be transparency into what the process is, when it should be applied, who should attend, expectations of all participants, and a consistent cadence with which it is conducted.

It should be possible to have ad-hoc discussions if needed, but there is something contradictory about suggesting that governance is a key component of a disciplined environment and then not being able to schedule the discussions themselves consistently.  Anecdotally, when we conducted project review discussions in my time at Sapient, it was commonly understood that if a team was ever “too busy” to schedule their review, they probably needed to have it as soon as possible, so the reason they were overwhelmed could be made clear.

 

Satisfy Your Stakeholders

The final dimension to consider in making governance effective is understanding and satisfying the stakeholders surrounding it, starting with the teams.  Any process can and should evolve, and that evolution should be based on experience obtained executing the process itself, monitoring operating metrics on an ongoing basis, and feedback that is continually gathered to make it more effective.

That being said, if the process never surfaces challenges and risks, it likely isn’t working properly, because governance is meant to do exactly that, along with providing teams with the support they need.  Satisfying stakeholders doesn’t mean painting an unrealistically positive picture, especially if there are fundamental issues in the underlying environment. 

I have seen situations where teams were encouraged to share inaccurate information about the health of their work in the interest of managing perceptions and avoiding difficult conversations that were critically needed.  This is why having an experienced team leading the conversations and a healthy, supportive, and trusting environment is so important.  Governance is needed because things do happen in delivery.  Technology work is messy and complicated and there are always risks that materialize.  The goal is to see them and respond before they have consequential impact.

 

Wrapping Up

Hopefully I’ve managed to hit some of the primary points to consider when establishing or evaluating a governance process.  There are many dimensions, but the most important ones are first, focusing on value and, second, on having the right mindset, relationships, and trust.  The process is too often the focus, and without the other parts, it will fail.  People are at the center of making it work, nothing else.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 03/31/2025

Approaching AI Strategy

Overview

In my first blog article back in 2021, I wrote that “we learn to value experience only once we actually have it”… and one thing I’ve certainly realized is that it’s much easier to do something quickly than to do it well.  The problem is that excellence requires discipline, especially when you want to scale or have sustainable results, and that often comes into conflict with a natural desire to achieve speed in delivery.

There is a tremendous amount of optimism in the transformative value AI can create across a wide range of areas.  While much continues to be written about various tools, technologies, and solutions, there is value in having a structured approach to developing AI strategy and how we will govern it once it is implemented across an organization.

Why?  We want results.

Some historical examples of why there is a case for action:

  • Many organizations have leveraged SharePoint as a way to manage documents. Because it’s relatively easy to use, access to the technology is generally provided to a broad set of users with little or no guidance on how to use it (e.g., a metatagging strategy), and over time a sprawl of content develops that may contain critical, confidential, or proprietary information with limited overall awareness of what exists and where
  • In the last number of years, Citizen Development has become popular, with the rise of low code, no code, and RPA tools, creating accessibility to automation that is meant to enable business (and largely non-technical) resources to rapidly create solutions, from the trivial to relatively complex. Quite often these solutions aren’t considered part of a larger application portfolio, are managed with little or no oversight, and become difficult to integrate, leverage, or support effectively
  • In data and analytics, tools like Alteryx can be deployed across a broad set of users who, after they are given access to requested data sources, create their own transformations, dashboards, and other analytical outputs to inform ongoing business decisions. The challenge occurs when the underlying data changes or is not understood properly (and downstream inferences can be incorrect), or when these individuals leave or transition out of their roles and the solutions they built are not well understood or are difficult for someone else to leverage or support

What these situations have in common is the introduction of something meant to serve as an enabler that has relative ease of use and accessibility across a broad audience, but where there also may be a lack of standards and governance to make sure the capabilities are introduced in a thoughtful and consistent manner, leading to inefficiency, increased cost, and lost opportunity.  With the amount of hype surrounding AI, the proliferation of tools, and general ease of use that they provide, the potential for organizations to create a mess in the wake of their experimentation with these technologies seems very significant. 

The focus of the remainder of this article is to explore some dimensions to consider in developing a strategy for the effective use and governance of AI in an organization.  The focus will be on the approach, not the content of an AI strategy, which can be the subject of a later article.  I am not suggesting that everything needs to be prescriptive, cumbersome, or bureaucratic to the point that nothing can get done, but I believe it is important to have a thoughtful approach to avoid the pitfalls that are common to these situations.

To the extent that, in some organizations, “governance” implies control versus enablement or there are historical real or perceived IT delivery issues, there may be concern with heading down this path.  Regardless of how the concepts are implemented, I believe they are worth considering sooner rather than later, given we are still relatively early in the adoption process of these capabilities.

Dimensions to Consider

Below are various aspects of establishing a strategy and governance process for AI that are worth consideration.  I listed them somewhat in a sequential manner, as I’d think about them personally, though that doesn’t imply you can’t explore and elaborate as many as are appropriate in parallel, and in whatever order makes sense.  The outcome of the exercise doesn’t need to be rigid mandates, requirements, or guidelines per se, but nearly all of these topics likely will come up implicitly or otherwise as we delve further into leveraging these technologies moving forward.

Lead with Value

The first dimension is probably the most important in forming an AI strategy, which is to articulate the business problems being solved and value that is meant to be created.  It is very easy with new technologies to focus on the tools and not the outcomes and start implementing without a clear understanding of the impact that is intended.  As a result, measuring the value created and governing the efficacy of the solutions delivered becomes extremely difficult.

As a person who does not believe in deploying technology for technology’s sake, I think identifying, tracking, and measuring impact is important to ensuring we ultimately make informed decisions in how we leverage new capabilities and invest in them appropriately over time.

Treat Solutions as Assets

Along the lines of the above point, there is risk associated with being consumed by what is “cool” versus what is “useful” (something I’ve written about previously) and treating new technologies like “gadgets” versus actual business solutions.  Where we treat our investments as assets, the discipline we apply in making decisions surrounding them should be greater.  This is particularly important in emerging technology because the desire to experiment and leverage new tools can quickly become unsustainable as the number of one-off, unsupportable solutions grows, eventually draining resources from new innovation.

Apply a Lifecycle Mindset

When leveraging a new technical capability, I would argue that we should look for opportunities to think about the full product lifecycle when it comes to how we identify, define, design, develop, manage, and retire solutions.  In my experience, the identify (finding new tools) and develop (delivering new solutions) aspects of the process receive significant emphasis in a speed-to-market environment, but the others much less so, often to the overall detriment of an organization when it is quickly saddled with the technical debt that comes from neglecting some of the other steps in the process.  This doesn’t necessarily imply a lot of additional steps, process overhead, or time and effort to be expended, but there is value created in each step of a product lifecycle (particularly in the early stages) and all of them need to be given due consideration if you want to establish a sustainable, performant environment.  The physical manifestation of some of these steps could be as simple as a checklist to make sure there aren’t avoidable blind spots that arise later on or that create business risk.

Define Operating Model

Introducing new capabilities, especially ones where the barrier to entry and ease of use allow for a wide audience of users, can cause unintended consequences if not managed effectively.  While it’s tempting to draw a business/technology dividing line, my experience has been that there can be very technically capable business consumers of technology and very undisciplined technologists who implement it as well.  The point of thinking through the operating model is to identify roles and responsibilities in how you will leverage new capabilities so that expectations and accountability are clear, along with guidelines for how various teams are meant to collaborate over the lifecycle mentioned above.

Whether the goal is to “empower end users” by fully distributing capabilities across teams with some level of centralized support and governance, or to fully centralize with decentralized demand generation (or any flavor in between), the point is to understand who is best positioned to contribute at different steps of the process and to promote consistency to an appropriate level so the performance and efficacy of both the process and the eventual solutions are something you can track, evaluate, and improve over time.  As an example, it would likely be very expensive and ineffective to hire a set of “prompt engineers” who operate in a fully distributed manner in a larger organization, by comparison with having a smaller, centralized set of highly skilled resources who can provide guidance and standards to a broader set of users in a decentralized environment.

Following onto the above, it is also worthwhile to decide whether and how these kinds of efforts should show up in a larger portfolio management process (to the extent one is in place).  Where AI and agentic solutions are meant to displace existing ways of working or produce meaningful business outcomes, the time spent delivering and supporting these solutions should likely be tracked so there is an ability to evaluate and manage these investments over time.

Standardize Tools

This will likely be one of the larger issues that organizations face, particularly given where we are with AI in a broader market context today.  Tools and technologies are advancing at such a rapid rate that having a disciplined process for evaluating, selecting, and integrating a specific set of “approved” tools is and will be challenging for some time.

While asking questions of a generic large language model like ChatGPT, Grok, DeepSeek, etc. and changing from one to the other seems relatively straightforward, there is a lot more complexity involved when we want to leverage company-specific data and approaches like RAG to produce more targeted and valuable outcomes.

When it comes to agentic solutions, there is also a proliferation of technologies at the moment.  In these cases, managing the cost, complexity, performance, security, and associated data privacy issues will also become complex if there aren’t “preferred” technologies in place and “known good” ways in which they can be leveraged.

Said differently, if we believe effective use of AI is critical to maintaining competitive advantage, we should know that the tools we are leveraging are vetted, producing quality results, and that we’re using them effectively.

Establish Critical Minimum Documentation

I realize it’s risky to use profanity in a professional article, but documentation has to be mentioned if we assume AI is a critical enabler for businesses moving forward.  Its importance can probably be summarized if you fast forward one year from today, hold a leadership meeting, and ask “what are all the ways we are using artificial intelligence, and is it producing the value we expected a year ago?”  If the response contains no specifics and supporting evidence, there should be cause for concern, because there will be significant investment made in this area over the next 1-2 years, and tracking those investments is important to realizing the benefits that are being promised everywhere you look.

Does “documentation” mean developing a binder for every prompt that is created, every agent that’s launched, or every solution that’s developed?  No, absolutely not, and that would likely be a large waste of money for marginal value.  There should be, however, a critical minimum amount of documentation that is developed in concert with these solutions to clarify their purpose, intended outcome/use, value to be created, and any implementation particulars that may be relevant to the nature of the solution (e.g. foundational model, data sets leveraged, data currency assumptions, etc.).  An inventory of the assets developed should exist, minimally so that it can be reviewed and audited for things like security, compliance, IP, and privacy-related concerns where applicable.
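
As a purely illustrative sketch, a minimal inventory record along these lines might look like the following; the field names mirror the documentation elements mentioned above (purpose, intended use, value, foundational model, data sets, data currency), but they are assumptions, not a standard.

```python
# Hypothetical minimal inventory record for an AI solution; fields are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISolutionRecord:
    name: str
    purpose: str                      # business problem being solved
    intended_use: str                 # who uses it and how
    expected_value: str               # outcome or metric it should move
    foundational_model: str           # base model relied upon
    data_sets: List[str] = field(default_factory=list)
    data_currency_assumptions: str = ""
    owner: str = ""                   # accountable team or individual
    review_notes: str = ""            # security / compliance / privacy review status

inventory: List[AISolutionRecord] = [
    AISolutionRecord(
        name="Contract clause summarizer",
        purpose="Reduce legal review cycle time",
        intended_use="Legal ops team, human-in-the-loop review",
        expected_value="Cut first-pass review from days to hours",
        foundational_model="(base LLM of choice)",
        data_sets=["Executed contracts repository"],
        data_currency_assumptions="Refreshed nightly",
    )
]
```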

Develop Appropriate Standards

There are various types of solutions that could be part of an overall AI strategy, and the opportunity to develop standards that promote quality, reuse, scale, security, and so forth is significant.  Standards might take the form of a “how to” guide for writing prompts, data sourcing and refresh standards for RAG-enabled solutions, reference architectures and design patterns across various solution types, or limits on the number of agents that can be developed without a review for optimization opportunities.  In this regard, something pragmatic, that isn’t overly prescriptive but also doesn’t reflect a total lack of standards, would be appropriate in most organizations.

In a decentralized operating environment, it is highly probable that solutions will be developed in a one-off fashion, with varying levels of quality, consistency, and standardization, and that could create issues with security, scalability, technical debt, and so on.  Defining the handshake between consumers of these new capabilities and those developing standards, along with when it is appropriate to define them, could be important things to consider.

Design Solutions

Again, as I mentioned in relation to the product lifecycle mindset, there can be a strong preference to deliver solutions without giving much thought to design.  While this is often attributed to “speed to market” and a “bias towards action”, it doesn’t take long for tactical thinking to lead to a considerable amount of technical debt, an inability to reuse or scale solutions, or significant operating costs that start to slow down delivery and erode value.  These are avoidable consequences when thought is given to architecture and design up front and the effort nearly always pays off over time.

Align to Data Strategy

This topic could be an article in itself, but suffice it to say that having an effective AI strategy is heavily dependent on an organization’s overall data strategy and the health of that portfolio.  Said differently: if your underlying data isn’t in order, you won’t be able to derive much in terms of meaningful insights from it.  Concerns related to privacy and security, data sourcing, stewardship, data quality, lineage and governance, use of multiple large language models (LLMs), effective use of RAG, the relationship of data products to AI insights and agents, and effective ways of architecting for agility, interoperability, composability, evolution, and flexibility are all relevant topics to be explored and understood.

Define and Establish a Governance Process

Having laid out the above dimensions in terms of establishing and operationalizing an AI strategy, there needs to be a way to govern it.  The goal of governance is to achieve meaningful business outcomes by promoting effective use and adoption of the new capabilities, while managing exposure related to introducing change into the environment.  This could be part of an existing governance process or set up in parallel and coordinated with others in place, but the point is that you can’t optimize what you don’t monitor and manage, and the promise of AI is such that we should be thoughtful about how we govern its adoption across an organization.

Wrapping Up

I hope the ideas were worth considering.  For more on my thoughts on AI in particular, my articles Exploring Artificial Intelligence and Bringing AI to the End User can provide some perspective for those who are interested.

Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 03/17/2025

The Seeds of Transformation

Introduction

“I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the Earth.” – John F. Kennedy, May 25, 1961

When JFK made his famous pronouncement in 1961, the United States was losing in the space race.  The Soviet Union was visibly ahead, to the point that the government shuffled the deck, bringing together various agencies to form NASA, and set a target far out ahead of where anyone was focused at the time: landing on the Moon.  The context is important as the U.S. was not operating from a position of strength and JFK didn’t shoot for parity or to remain in a defensive posture. Instead, he leaned in and set an audacious goal that redefined the playing field entirely.

I spoke at a town hall fairly recently about “The Saturn V Story”, a documentary that covers the space race and the journey leading to the Apollo 11 moon landing on July 20, 1969.  The scale and complexity of what was accomplished in a relatively short timeframe was truly incredible and feels like a good way to introduce a Transformation discussion.  The Apollo program engaged 375,000 people at its peak, required extremely thoughtful planning and coordination (including the Mercury and Gemini programs that preceded it), and presented a significant number of engineering challenges that needed to be overcome to achieve its ultimate goal.  It’s an inspiring story, as any successful transformation effort should be.

The challenge is that true transformation is exceptionally difficult and many of these efforts fail or fall short of their stated objectives.  The remainder of this article will highlight some key dimensions that I believe are critical in increasing the probability of success.

Transformation is a requirement of remaining competitive in a global digital economy.  Disruptions (e.g., cloud computing, robotics, orchestration, artificial intelligence, cyber security exposure, quantum computing) have occurred and will continue to occur, and success will be measured, in part, by an organization’s ability to continuously transform, leveraging advanced capabilities to its maximum strategic benefit.

Successful Transformation

Culture versus Outcome

Before diving into the dimensions themselves, I want to emphasize the difference I see between changing culture and the kind of transformation I’m referencing in this article.  Culture is an important aspect of effecting change, as I will discuss in the context of the dimensions themselves, but a change in culture that doesn’t lead to a corresponding change in results is relatively meaningless.

To that end, I would argue that it is important to think about “change management” as a way to transition between the current and desired ways of working in a future state environment, but with specific, defined outcomes attached to the goal.

It is insufficient, as an example, to express “we want to establish a more highly collaborative workplace that fosters innovation” without also being able to answer the questions: “To what end?” or “In the interest of accomplishing what?”  Arguably, it is the desired outcome that sets the stage for the nature of the culture that will be required, both to get to the stated goal as well as to operate effectively once those goals are achieved.  In my experience, this balance isn’t given enough thought when change efforts are initiated, and it’s important to make sure culture and desired outcomes are both clear and aligned with each other.

For more on the fundamental aspects of a healthy environment, please see my article on The Criticality of Culture.

What it Takes

Successful transformation efforts require focus on many levels and in various dimensions to manage what ultimately translates to risk.

The set that comes to mind as most critical is having:

  • An audacious goal
    • Transformation is, in itself, a fundamental (not incremental) change in what an organization is able to accomplish
    • To the extent that substantial change is difficult, the value associated with the goal needs to outweigh the difficulties (and costs) that will be required to transition from where you are to where you need to be
    • If the goal isn’t compelling enough, there likely won’t be the requisite level of individual and collective investment required to overcome the adversity that is typically part of these efforts. This is not just about having a business case.  It’s about giving people a reason to care… and that level of investment matters where transformation is the goal
  • Courageous, committed leadership
    • Change is, by its nature, difficult and disruptive. There will be friction and resistance that comes from altering the status quo
    • The requirements of leadership in these efforts tend to be very high, because of the adversity and risk that can be involved, and a degree of fearlessness and willingness to ride through the difficulties is important
    • Where this level of leadership isn’t present, it will become easy to focus on obstacles versus solutions and to avoid taking risks, which leads to suboptimized results or overall failure of the effort. If it was easy to transform, everyone would be doing it all the time
    • It is worth noting that, in the case of the Apollo missions, JFK wasn’t there to see the program through, yet it survived both his passing and significant events like the Apollo fire without compromising the goal itself
    • A question to consider in this regard: Is the goal so compelling that, if the vision holder / sponsor were to leave, the effort would still move forward? There are many large-scale efforts I’ve seen over the years where a change in leadership affects the commitment to a strategy.  There may be valid reasons for this to be the case, but arguably both a worthy goal and strong leadership are necessary components in transformation overall
  • An aligned and supportive culture
    • There is a significant aspect of accomplishing a transformational agenda that places a burden on culture
    • On this point, the going-in position matters in the interest of mapping out the execution approach, because anything about the environment that isn’t conducive to facilitating and enabling collaboration and change will ultimately create friction that needs to be addressed and (hopefully) overcome
    • To the extent that the organization works in silos or that there is significant and potentially unhealthy internal competition within and across leaders, the implications of those conflicts need to be understood and mitigated early on (to the degree possible) so as to avoid what could lead to adverse impacts on the effort overall
    • As a leader said to me very early in my career, “There is room enough in success for everybody.” Defining success at an individual and collective level may be a worthwhile activity to consider depending on the nature of where an organization is when starting to pursue change
    • On this final point, I have been in the situation more than once professionally where a team worked to actively undermine transformation objectives because those efforts had an adverse impact to their broader role in an organization. This speaks, in part, to the importance of engaged, courageous leadership to bring teams into alignment, but where that leadership isn’t present, it definitely makes things more difficult.  Said differently, the more established the status quo is, the harder it may resist change
  • A thoughtful approach
    • “Rome was not built in a day” is probably the best way to summarize this point
    • Depending on the level of complexity and degree of change involved, the more thought and attention that needs to be paid to planning out the approach itself
    • The Apollo program is a great example of this, because there were countless interim stages in the development of the Saturn V rocket, creating a safe environment for manned space flight, procedures for rendezvous and docking of the spacecraft, etc.
    • In a technology delivery environment, these can be program increments in a scaled Agile environment, selective “pilots” or “proof-of-concept” efforts, or interim deliveries in a more component-based (and service-driven) architecture. The overall point being that it’s important to map out the evolution of current to future state, allowing for testing and staging of interim goals that help reduce risk on the ultimate objectives
    • In a different example, when establishing an architecture capability in a large, complex organization, we established an operating model to define roles and responsibilities, but then operationalized the model in layers to help facilitate change with defined outcomes spread across multiple years. This was done purposefully and deliberately in the interest of making the changes sustainable and to gradually shift delivery culture to be more strategically-aligned, disciplined, and less siloed in the process
  • Agility and adaptiveness
    • The more advanced and innovative the transformation effort is, the more likely it will be that there is a higher degree of unknown (and knowledge risk) associated with the effort
    • To that end, it is highly probable that the approach to execution will evolve over time as knowledge gaps are uncovered and limitations and constraints need to be addressed and overcome
    • There are countless examples of this in the Apollo program, one of the early ones being the abandonment of the “Nova” rocket design, which involved a massive vehicle that ultimately was eliminated in deference to the multi-stage rocket and lunar lander / command module approach. In this case, the means for arriving at and landing on the moon was completely different than it was at the program’s inception, but the outcome was ultimately the same
    • I spend some time discussing these “points of inflection” in my article On Project Health and Transparency, but the important concept is not to be too prescriptive when planning a transformation effort, because execution will definitely evolve
  • Patience and discipline
    • My underlying assumption is that the level of change involved in transformation is significant and, as such, it will take time to accomplish
    • The balance to be struck is ultimately in managing interim deliveries in relation to the overall goals of the effort. This is where patience and discipline matter, because it is always tempting to take short cuts in the interest of “speed to market” while compromising fundamental design elements that are important to overall quality and program-level objectives (something I address in Fast and Cheap, Isn’t Good)
    • This isn’t to say that tradeoffs can’t or shouldn’t be made, because they often are, but rather that these be conscious choices, done through a governance process, and with a full understanding of the implications of the decisions on the ultimate transformation objectives
  • A relentless focus on delivery
    • The final dimension is somewhat obvious, but is important to mention, because I’ve encountered transformative efforts in the past that spent so much energy either on structural or theoretical aspects to their “program design” that they actually failed to deliver anything
    • In the case of the Apollo program, part of what makes the story so compelling is the number of times the team needed to innovate to overcome issues that arose, particularly to various design and engineering challenges
    • Again, this is why courageous, committed leadership is so important to transformation. The work is difficult and messy and it’s not for the faint of heart.  Resilience and persistence are required to accomplish great things.

Wrapping Up

Hopefully this article has provided some areas to consider in either mapping out or evaluating the health of a transformational effort.  As I covered in my article On Delivering at Speed, there are always opportunities to improve, even when you deliver a complex or high-risk effort.  The point is to be disciplined and thoughtful in how you approach these efforts, so the bumps that inevitably occur are more manageable and the impact they have are minimized overall.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 12/29/2024

Ethics and Technology

Overview

Having worked in both consulting and corporate environments for many years and across multiple industries, I believe we’re at an interesting juncture in how technology is leveraged at a macro level and in the broader business and societal impacts of those choices, whether that is AI-generated content that could easily be mistaken for “real” events or the leverage of data collection and advanced analytics in various business and consumer scenarios to create “competitive advantage”.

The latter scenario has certainly been discussed for quite a while, whether in relation to managing privacy while using mobile devices, trusting a search engine and how it “anonymizes” your data before potentially selling it to third parties, or whether the results presented as the outcome of a search (or GenAI request) are objective and unbiased or are presented with some level of influence given the policies or leanings of the organization sponsoring it.

 

The question to be explored here is: How do we define the ethical use of technology?

 

For the remainder of this article, I’ll suggest some ways to frame the answer in various dimensions, acknowledging that this isn’t a black-and-white issue and the specifics of a situation could make some of the considerations more or less relevant.

 

Considerations

What Ethical Use Isn’t

Before diving into what I believe could be helpful in framing an answer, I wanted to clarify what I don’t consider a valid approach, which is namely the argument used in a majority of cases where individuals or organizations cross a line: We can use technology in this way because it gives us competitive advantage.

Competitive advantage can tend to be an easy argument in the interest of doing something questionable, because there is a direct or indirect financial benefit associated with the decision, thereby clouding the underlying ethics of the decision itself.  “We’re making more money, increasing shareholder value, managing costs, increasing profitability, etc.” are things that tend to move the needle in terms of clearing the roadblocks that can exist in organizations and mobilizing new ideas.  The problem is that, with all of the data collected in the interest of securing approval and funding for an initiative, I haven’t seen many cases where there is a proverbial “box to check” in terms of the effort conforming to ethical standards (whether specific to technology use or otherwise).

 

What Ethical Use Could Be

That point having been stated, below are some questions that could be considered as part of an “ethical use policy”, understanding that not all may have equal weight in an evaluation process.

They are:

  • Legal/Compliance/Privacy
    • Does the ultimate solution conform to existing laws and regulations for your given industry?
    • Is there any pending legislation related to the proposed use of technology that could create such a compliance issue?
    • Is there any industry-specific legislation that would suggest a compliance issue if it were logically applied in a new way that relates to the proposed solution?
    • Would the solution cause a compliance issue in another industry (now or in legislation that is pending)? Is there risk of that legislation being applied to your industry as well?

 

  • Transparency
    • Is there anything about the nature of the solution that, were it shared openly (e.g., through a press release or industry conference/trade show), would cause customers, competitors, or partners/suppliers to raise issues with the organization’s market conduct or end user policies? This can be a tricky item given the previous points on competitive advantage and what might be labeled a “trade secret” but potentially violate antitrust, privacy, or other market expectations
    • Does anything about the nature of the solution, were it to be shared openly, suggest that it could cause trust issues between customers, competitors, suppliers, or partners with the organization and, if so, why?

 

  • Cultural
    • Does the solution align to your organization’s core values? As an example, if there is a transparency concern (above) and “Integrity” is a core value (which it is in many organizations), why does that conflict exist?
    • Does the solution conform to generally accepted practices or societal norms in terms of business conduct between you and your target audience (customers, vertical or horizontal partners, etc.)?

 

  • Social Responsibility
    • Does the solution create any potential issues from an environmental, societal, or safety standpoint that could have adverse impacts (direct or indirect)?

 

  • Autonomy and Objectivity
    • Does the solution provide a fact-based (or analytically correct) outcome, free of any potential bias, that can also be governed, audited, and verified? This is an important dimension to consider given that our dependency on automation continues to increase and we want to be able to trust the security, reliability, accuracy, and so on of what that technology provides.

 

  • Competitive
    • If a competitor announced they were developing a solution of exactly the same nature as what is proposed, would it be a comfortable situation or something that you would challenge as an unethical or unfair business practice in any way? Quite often, the lens through which unethical decisions are made is biased by an internal focus.  If that line of sight were reversed and a competitor was open about doing exactly the same thing, would that be acceptable or not?  If there would be issues, there is likely cause for concern in developing the solution yourself

 

Wrapping Up

From a process standpoint, a suggestion would be to take the above list and discuss it openly, in the interest of not only determining the right criteria for you but also establishing where these opportunities exist (because they do and will, the more analytics and AI-focused capabilities advance).  There should be a check-and-balance process for the ethical use of technology, in line with any broader compliance and privacy-related efforts that may exist within an organization today.

Ultimately, the “right thing to do” can be a murky and difficult question to answer, especially with ever-expanding tools and technologies that create capabilities a digital business can use to its advantage.  But that’s where culture and values should still exist, not simply because there is or isn’t a compliance issue, but because reputations are made and reinforced over time through these kinds of decisions, and they either help build a brand or can damage it when the right questions aren’t explored at the right time.

It’s interesting to consider, as a final note, that most companies have an “acceptable use of IT” policy for employees, contractors, and so forth, in terms of setting guidelines for what they can or can’t do (e.g., accessing ‘prohibited’ websites / email accounts or using a streaming platform while at work), but not necessarily for technology directed outside the organization.  As we enter a new age of AI-enabled capabilities, perhaps it’s a good time to look at both.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 10/25/2024

Bringing AI to the End User

Overview

In my recent article on Exploring Artificial Intelligence, I covered several dimensions of how I think about the direction of AI, including how models will evolve from general-purpose and broad-based to be more value-focused and end consumer-specific as the above diagram is intended to illustrate.

The purpose of this article is to dive a little deeper into a mental model for how I believe the technology could become more relevant and valuable in an end-user (or end consumer) specific context.

Before that, a few assertions related to the technology and end user application of the technology:

  • The more we can passively collect data in the interest of simplifying end-user tasks and informing models, the better. People can be both inconsistent and unreliable in how they capture data.  In reality, our cell phones are collecting massive amounts of data on an ongoing basis that is used to drive targeted advertising and other capabilities to us without our involvement.  In a business context, however, the concept of doing so can be met with significant privacy and other concerns and it’s a shame because, while there is data being collected on our devices regardless, we aren’t able to benefit from it in the context of doing our work
  • Moving from a broad- or persona-based means of delivering technology capabilities to a consumer-specific approach is a potentially significant advancement in enabling productivity and effectiveness. This would be difficult or impossible to achieve without leveraging an adaptive approach that synthesizes various technologies (personalization, customization, dynamic code generation, role-based access control, AI/ML models, LLMs, content management, and so forth) to create a more cohesive and personalized user experience
  • While I am largely focusing on end-user application of the technology, I would argue that the same concepts and approach could be leveraged for the next generation of intelligent devices and digital equipment, such as robotics in factory automation scenarios
  • To make the technology both performant and relevant, part of the design challenge is to continually reduce and refine the level of “model” information that is needed at the next layer of processing so as not to overload the end computing device (presumably a cell phone or tablet) with a volume of data that isn’t required to enable effective action on behalf of the data consumer.

The rest of this article will focus on providing a mental model for how to think about the relationship across the various kinds of models that may make up the future state of AI.

Starting with a “Real World” example

Having spent a good portion of my time off traveling across the U.S., I was reminded more than once of the trust I place in Google Maps (even with a printed road atlas in my car), particularly when driving through an “open range” gravel road with cattle roaming about in northwest Nebraska on my way to South Dakota.  In many ways, navigation software represents a good starting point for where I believe intelligent applications will eventually go in the business environment.

Maps is useful as a tool because it synthesizes the data it has on roads and navigation options with specific information like my chosen destination, location, speed traps, delays, and accident information that is specific to my potential routes, allowing for a level of customization if I prefer to take routes that avoid tolls and so on.  From an end-user perspective, it provides a next recommended action, remaining contextually relevant to where I am and what I need to do, along with how long it will be until that action needs to be taken, the distance remaining, and the time I should arrive at my final destination.

In a connected setting, navigation software pulls pieces of its overall model and applies data on where I am and where I’m going to (ideally) help me get there as efficiently as possible.  The application is useful because it is specific to me, to my destination, and to my preferred route, and is different from what would be delivered to a car immediately behind me, despite leveraging the same application and infrastructure.  This is the direction I believe we need to go with intelligent applications, to drive individual productivity and effectiveness.

Introducing the “Tree of Knowledge” concept

The Overall Model

The visual above is meant to represent the relationship of general-purpose and foundational models to what is ultimately delivered to an end user (or piece of digital equipment) in a distributed fashion.

Conceptually, I think of the relationship across data sets as if it were a tree. 

  • The general-purpose model (e.g., LLM) provides the trunk that establishes a foundation for downstream analytics
  • Domain-specific models (e.g., RAG) act as the branches that rely on the base model (i.e., the trunk) to provide process- or function-specific capabilities that can span a number of end-user applications, but have specific, targeted outcomes in mind
  • A “micro”-model is created when specific branches of the tree are deployed to an end-user based on their profile. This represents the subset that is relevant to that data consumer given their role, permissions, experience level, etc.
  • The data available at the end point (e.g., mobile device) then provides the leaves that populate the branches of the “micro”-models that have been deployed to create an adaptive model used to inform the end user and drive meaningful and productive action.

The adaptive model should also take into account user preferences (via customization options) and personalization to tune their experience as closely as possible to what they need and how they work.

In this way, the progression of models moves from general to very specific, end-user focused solutions that are contextualized with real-time data much the same as the navigation example above.

It is also worth noting that, in addition to delivering these capabilities, the mobile device (or endpoint) may collect and send data back to further inform and train the knowledge models by domain (e.g., process performance data) and potentially develop additional branches based on gaps that may surface in execution.
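
To make the “tree” relationship more concrete, here is a hypothetical sketch in code of how the trunk, branches, micro-model, and leaves described above might relate; all class and field names are illustrative assumptions rather than a prescribed implementation.

```python
# Hypothetical sketch of the "Tree of Knowledge" concept: a general-purpose
# model (trunk), domain models (branches), a user-specific micro-model (the
# subset of branches deployed to a device), and endpoint data (leaves).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DomainModel:                 # a "branch": process- or function-specific
    name: str
    base_model: str                # the "trunk" it relies on (e.g., an LLM)

@dataclass
class MicroModel:                  # the branches deployed to a specific user
    user_id: str
    branches: List[DomainModel]
    preferences: Dict[str, str] = field(default_factory=dict)

    def contextualize(self, leaves: Dict[str, str]) -> Dict[str, str]:
        """Combine deployed branches with endpoint data (the 'leaves')."""
        return {"user": self.user_id,
                "capabilities": ", ".join(b.name for b in self.branches),
                **self.preferences, **leaves}

trunk = "general-purpose LLM"
maintenance = DomainModel("equipment maintenance guidance", trunk)
training = DomainModel("just-in-time training", trunk)

# Only the branches relevant to this user's role are deployed to the device.
device_model = MicroModel("user-123", [maintenance, training],
                          preferences={"language": "en"})
context = device_model.contextualize({"location": "Line 4", "shift": "night"})
print(context)
```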

Applying the Model

Having set context on the overall approach, there are some notable differences from how these capabilities could create a different experience and level of productivity than today, namely:

  • Rather than delivering content and transactional capabilities based on an end-user’s role and persona(s), those capabilities would be deployed to a user’s device (the branches of the “micro”-model), but synthesized with other information (the “leaves”) like the user’s experience level, preferences, location, training needs, equipment information (in a manufacturing-type context), to generate an interface specific to them that continually evolves to optimize their individual productivity
  • As new capabilities (i.e., “branches”) are developed centrally, they could be deployed to targeted users, and their individual experiences would adapt to incorporate them in ways that work best for each user and their given configuration, without having to relearn the underlying application(s)

Going Back to Navigation

On the last point above, a parallel example would be the introduction of weather information into navigation. 

At least in Google Maps, while there are real-time elements like speed traps, traffic delays, and accidents factored into the application, there is currently no mechanism to recognize or warn end users about significant weather events that may also surface along the route.  In practice, where severe weather is involved, this could represent a safety risk to the traveler and, in the event that the model was adapted to include a “branch” for this kind of data, one would hope that the application would behave the same from an end-user standpoint, but with the additional capability integrated into the application.

Wrapping Up

Understanding that we’re still early in the exploration of how AI will change the way we work, I believe that defining a framework for how various types of models can integrate and work across purposes would enable significant value and productivity if designed effectively.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 09/14/2024

Exploring Artificial Intelligence

Overview

It’s impossible to scroll through a news feed and miss the energy surrounding AI and its potential to transform.  The investment in technology as a strategic differentiator is encouraging to see, particularly as a person who thrives on change and innovation. It is, however, also concerning that the ways in which it is often described are reminiscent of other technology advances of the past… CRM, BigData, .com… where there was an immediate surge in spending without a clear set of outcomes in mind, operating approach, or business architecture established for how to leverage it effectively.  Consequently, while a level of experimentation is always good in the interest of learning and exploring, a lot of money and time can be wasted (and technical debt created) without necessarily creating any meaningful business value through the process.

For the purposes of this article, I’m going to focus on five dimensions of AI and how I’m thinking about them:

  • Framing the Problem – Thinking about how AI will be used in practice
  • The Role of Compute – Considering the needs for processing moving forward
  • Revisiting Data Strategy – Putting AI in the context of the broader data landscape
  • Simplifying “Intelligence” – Exploring the end user impact of AI
  • Thinking About Multi-Cloud – Contemplating how to approach AI in a distributed environment

This topic is very extensive, so I’ll try to keep the thoughts at a relatively high level to start and dive into more specifics in future articles as appropriate.

Framing the Problem

Considering the Range of Opportunities

While a lot of the attention surrounding Generative AI over the last year has been focused on content generation for research, communication, software development, and other purposes, I believe the focus for how AI can create business value will shift substantially to be more outcome-driven and directed at specific business problems.  In this environment, smaller, more focused data sets (e.g., incorporating process, market, equipment, end user, and environmental data) will be analyzed to understand causal relationships in the interest of producing desired business outcomes (e.g., optimizing process efficiency, improving risk management, increasing safety) and content (e.g., just in time training, adaptive user experiences).  Retrieval-Augmented Generation (RAG) models are an example of this today, with a purpose-built model leveraging a foundational large language model to establish context for a more problem-specific solution.
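
As a rough illustration of the RAG pattern referenced above, the sketch below shows the basic shape of retrieval followed by prompt augmentation.  The keyword-overlap retriever and the llm callable are deliberately naive placeholders (a real implementation would use vector search and an actual model API); the point is simply that a small, domain-specific corpus establishes context for the general-purpose model.

    def retrieve(query, documents, top_k=2):
        # Naive keyword-overlap retrieval; real systems would use embeddings/vector search
        scored = []
        for doc in documents:
            overlap = len(set(query.lower().split()) & set(doc.lower().split()))
            scored.append((overlap, doc))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [doc for _, doc in scored[:top_k]]

    def answer_with_rag(query, documents, llm):
        # Augment the prompt with retrieved, domain-specific context before
        # calling the general-purpose foundational model
        context = "\n".join(retrieve(query, documents))
        prompt = "Using only this context:\n" + context + "\n\nQuestion: " + query
        return llm(prompt)

    # Illustrative usage with a stand-in "model" that just echoes part of the prompt
    corpus = ["Valve reset procedure: hold the reset switch for five seconds.",
              "Safety policy: lock out equipment before maintenance."]
    print(answer_with_rag("How do I reset the valve?", corpus, lambda prompt: prompt[:120]))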

This is not to suggest that general purpose models will decline in utility, but rather that I believe those applications will be better understood, mature, and become integrated where they create the most value (in relatively short order).  The focus will then shift towards areas where more direct business value can be obtained through an evolution of these technologies.

For that to occur, the fundamentals of business process analysis need to regain some momentum to overcome the ‘silver bullet’ mentality that seems largely prevalent with these technologies today.  It is, once again, a rush towards “the cool versus the useful”, which goes back to my opening remark about how current AI discussions feel a lot like conversations at the start of the .com era, and the sooner we shift towards a disciplined approach to leveraging these technology advancements, the better.

The opportunity will be to look at how we can leverage what these models provide, in terms of understanding multi-dimensional relationships across large sets of data, but then to extend the concept to become more deterministic in terms of what decisions under a given set of conditions are most likely to bring about desired outcomes (i.e., causal models).  This is not where we are today, but it is where I believe these technologies are meant to go in the near future.  Ultimately, we don’t just want to produce content, we want to influence processes and business results with support from artificial intelligence.

As purpose-built models evolve, I believe there will be a base set of business insights that are made available across communities of end users, and then an emergence of secondary insights that are developed in a derivative fashion.  In this way, rather than try to summit Mount Everest in a direct ascent, we will establish one or more layers of outcomes (analogous to having multiple base camps) that facilitate the eventual goal.

Takeaways

  • General purpose AI and large language models (LLMs) will continue to be important and become integrated with how we work and consume technology, but reach a plateau of usefulness fairly rapidly in the next year or so
  • Focus will shift towards integrating transactional, contextual, and process data with the intention of predicting business outcomes (causal AI) in a much more targeted way
  • The overall mindset will pivot from models that do everything to ones that do something much more specific, with a desired outcome in mind up front

The Role of Compute

Considering the Spectrum of Needs

Having set the context of general versus purpose-built AI and the desire to move from content to more outcome-focused intelligence, the question is what this means for computing.  There is a significant amount of attention going to specialized processors (e.g., GPUs) at the moment, with the presumption that there are significant computing requirements to generate models based on large sets of data in a reasonable amount of time.  That being said, while content-focused outcomes may be based on a large volume of data and only need to be refreshed on a periodic basis, the more we want AI to assist in the performance of day-to-day tasks, the more we need the insights to be produced on a near-real time basis and available on a mobile device or embedded on a piece of digital equipment.

Said differently, as our focus shifts to smaller and more specific business problems, so should the data sets involved, making it possible to develop purpose-built models on more “standard” computing platforms that are commercially available, where the models are refreshed on a more frequent basis, taking into account relevant environmental conditions, whether that’s production plans or equipment status in manufacturing, market conditions in financial services, or weather patterns or other risk factors in insurance.

The Argument for a “Micro-Model”

Assuming a purpose-built model can be developed with a particular business outcome or process in mind, where things could take an interesting leap forward would be extending those models into edge computing environments, like that of a digital worker in a manufacturing facility, where the specific end user’s knowledge and skills, geo-location, environmental conditions, and equipment status could be fed into a purpose-built model and then extended to create a more adaptive model that provides a user-specific set of insights and instructions to drive productivity, safety, and effectiveness.

Ultimately, AI needs to be focused on the individual and run on something as accessible as a mobile device to truly realize its potential.  The same would also be true for extending models that could be embedded within a piece of industrial equipment to run as part of a digital facility.  That is beyond anything we can do today; it makes insights personalized and specific to an individual, and that concept holds significantly more business value than targeting a specific user group or persona from an application development standpoint.  Said differently, the concept is similar to integrating personalization, workflow, presentation, and insights into one integrated technology.

With this in mind, perhaps the answer will ultimately still result in highly specialized computing, but before rushing in the direction of quantum computing and buying a significant number of GPUs, I’d definitely consider the ultimate outcome we want, which is to put the power of insights in the hands of end users in day-to-day activities so they can be much more effective in what they are able to do.  That is not a once-a-month refresh of a massive amount of data.  It is a constantly evolving model that is based on learnings from the past, but also on the current realities and conditions of the moment and the specific individual taking action on those things.

Takeaways

  • Computing requirements will shift from centralized processing of large data volumes to smaller, curated data sets that are refreshed more often and targeted to specific business goals
  • Ultimately, the goal should be to enable end users with a highly personalized model that is focused on them, the tasks they need to accomplish, and the current conditions under which they are operating
  • Processing for artificial intelligence will therefore be distributed across a spectrum of environments from large scale centralized methods to distributed edge appliances and mobile devices

Revisiting Data Strategy

Business Implications of AI

The largest risk I see with artificial intelligence today is no different than with anything else involving data: the value of new technology is only as good as the underlying data quality, and that’s a business issue (for the most part).

Said differently, in the case of AI, if the underlying data sets upon which models are developed have data quality issues and there is a lack of data management and data governance in place, the inferences drawn will likely be of limited value.

Ultimately, the more we move from general purpose to purpose-built solutions, the more the ability to identify relevant and necessary data to be incorporated into a model becomes a significant accelerator of value.  This is because the “give me all the data” approach would likely both increase the time to develop and produce models as well as introduce significant overhead in ensuring data quality and governance to confirm the usefulness of the resulting models.

If, as an example, I wanted to use AI to ingest all the training materials developed across a set of manufacturing facilities in the interest of synthesizing, standardizing, and optimizing them across an enterprise, the underlying quality of those materials becomes critically important in deriving the right outcomes.  There may be out of date procedures, unique steps specific to a location, quality issues in some of the source content, etc.  The technology itself doesn’t solve these issues. Arguably, a level of data wrangling or quality tools could be helpful in identifying and surfacing these issues, but the point is that data governance and curation are required before the infrastructure would produce the desired business outcomes.
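
To illustrate the curation point, here is a hedged sketch of the kind of lightweight pre-ingestion checks that might flag problem documents.  The field names and thresholds are hypothetical; in practice, these rules would come out of the data governance process itself.

    from datetime import date

    def flag_curation_issues(document, max_age_years=3):
        # Return reasons a training document may need attention before being
        # used to train or ground a model
        issues = []
        age_years = date.today().year - document["last_reviewed"].year
        if age_years > max_age_years:
            issues.append("procedure may be out of date")
        if document.get("site_specific"):
            issues.append("contains steps unique to a single location")
        if not document.get("approved_by"):
            issues.append("missing ownership/quality sign-off")
        return issues

    # Illustrative usage
    doc = {"title": "Filler line changeover", "last_reviewed": date(2019, 5, 1),
           "site_specific": True, "approved_by": None}
    print(flag_curation_issues(doc))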

Technology Implications of AI

As the diagram intends to indicate, whether AI lives as a set of intelligent agents that run as separate stovepipes in parallel with existing applications and data solutions or becomes integrated within them, the direction for how things will evolve is an important element of data strategy to consider, particularly in a multi-cloud environment (something I’ll address in the final section).

As discussed in The Intelligent Enterprise, I believe that the eventual direction for AI (as we already see somewhat evidenced with Copilot in Microsoft Office 365) is to move from separate agents and data apps (“intelligent agents”) to having those capabilities integrated into the workflow of applications themselves (making them “intelligent applications”), where they can create the most overall value.

What this suggests to me is that transaction data from applications will make its way into models, and be exposed back into consuming applications via AI services.  Whether the data ultimately moves into a common repository that can handle both the graph and relational data within the same data solution remains to be seen, but having personally developed an integrated object- and relational-database for a commercial software package thirty years ago at the start of my career, I can foresee that there may be benefits in thinking through the value of that kind of solution.

Where things get more complicated on an enterprise level is when you scale these concepts out. I will address the end user and multi-cloud aspects of this in the next two sections, but it’s critically important in data strategy to consider how too many point solutions in the AI domain could significantly increase cost and complexity (not to mention have negative quality consequences).  As data sets and insights are meant to extend outside an individual application to cross-application and cross-ecosystem levels, the ways in which that data is stored, accessed, and exposed likely will become significant.  My article on Perspective on Impact-Driven Analytics attempted to establish a layered approach to how to think about the data landscape, from production to consumption, that may provide a starting point for evaluating alternatives in this regard.

Takeaways

  • While AI provides new technology capabilities, business ownership / stewardship of data and the processes surrounding data quality, data management, and data governance are extremely critical in an AI-enabled world
  • As AI capabilities move within applications, the need to look across applications for additional insights and optimization opportunities will emerge. To the extent that can be designed and architected in a consistent approach, it will be significantly more cost-effective and create more value over time at an enterprise level
  • Experimentation is appropriate in the AI domain for the foreseeable future, but it is important to consider how these capabilities will ultimately become integrated with the application and data ecosystems in medium to larger organizations in the interest of getting the most long-term value from the investments

Simplifying “Intelligence”

Avoiding the Pitfalls of Introducing New Technologies to End Users

The diagram above is meant to help conceptualize what could ultimately occur if AI capabilities are introduced as various data apps or intelligent agents running separately from applications, versus becoming an integrated part of the way intelligent applications behave over time.

At an overall level, expecting users to arbitrate new capabilities without integrating them thoughtfully into the workflow and footprint that exists creates the conditions for significant change management and productivity issues.  This is always true when introducing change, but the expectations associated with the disruptive potential of AI (at the moment) are quite high, and that could set the stage for disappointment if there isn’t a thoughtful design in place for how the solutions are meant to make the consumer more effective on a workflow and task level.

Takeaways

  • Intelligence capabilities will move inside applications rather than be adjacent to them, providing more of a “guided path” approach to end users
  • To the degree that “micro-models” are eventually in place, that could include making the presentation layer of applications personalized to the individual user based on their profile, experience level, role, and operating conditions
  • The role of “Intelligent Agents” will take on a higher-level, cross-application focus, which could be (as an example) optimizing notifications and alerts coming from various applications to a more thoughtful set of prioritized actions intended to maximize individual performance

Thinking About Multi-Cloud

Working Across Environments

With the introduction of AI capabilities at an enterprise level, the challenge becomes how to leverage and integrate these technologies, particularly given that data may exist across a number of hosted and cloud-based environments.  For simplicity’s sake, I’m going to assume that any cloud capability required for data management and AI services can be extended to the edge (via containers), though that may not be fully true today.

At an overall level, as it becomes desirable to extend models to include data resident both in something like Microsoft Office 365 (running on Azure) and corporate transactional data (largely running in AWS if you look at market share today), the considerations and costs for moving data between platforms could be significant if not architected in a purposeful manner.

To that end, my suggestion is to look at business needs in one of three ways:

  1. Those that can be addressed via a single cloud platform, in which case it would likely be appropriate to design and deliver solutions leveraging the AI capabilities available natively on that platform
  2. To the extent a solution extends across multiple providers, it may be possible to look at layering the solutions such that each cloud platform performs a subset of the analysis, resulting in pre-processed data that could then be published to a centralized, enterprise cloud environment where the various data sets are pulled into a single enterprise model that is used to address the overall need
  3. If a partitioning approach isn’t possible, then some level of cost, capability, and performance analysis would likely make sense to determine where data should reside to enable the necessary integrated models to be developed

Again, the point is to step back from individual solutions and projects to consider the enterprise strategy for how data will be managed and how models will be developed and deployed overall.  The alternative approach of deploying too many point solutions could lead to considerable cost and complexity (i.e., technical debt) over time.
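
To illustrate the second (layered) option above, the sketch below assumes each platform can produce a pre-processed summary that is then combined in a central enterprise environment, so that only aggregates (rather than raw data) move between clouds.  The platform names and functions are placeholders, not any specific cloud service.

    def summarize_on_platform(platform_name, records):
        # Pre-process locally on each cloud so only aggregates cross platform boundaries
        total = sum(r["value"] for r in records)
        return {"platform": platform_name, "count": len(records), "total": total}

    def build_enterprise_view(platform_summaries):
        # Combine the per-platform aggregates in the central enterprise environment
        return {"count": sum(s["count"] for s in platform_summaries),
                "total": sum(s["total"] for s in platform_summaries)}

    # Illustrative usage
    productivity = summarize_on_platform("productivity-cloud", [{"value": 10}, {"value": 4}])
    transactional = summarize_on_platform("transactional-cloud", [{"value": 7}])
    print(build_enterprise_view([productivity, transactional]))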

Takeaways

  • AI capabilities are already available on all the major cloud platforms. I believe they will reach relative parity from a capability standpoint in the foreseeable future, to the point that they shouldn’t be a primary consideration in how data and models are managed and deployed
  • The more the environment can be designed with standards, modularity, integration, interoperability, and a level of composability in mind, the better.  Technology solutions will continue to be introduced that an organization will want to leverage without having to abandon or migrate everything that is already in place
  • It is extremely probable that AI models will be deployed across cloud platforms, so having a deliberate strategy for how to manage and facilitate this should be given consideration
  • A lack of overall multi-cloud strategy will likely create complexity and cost that may be difficult to unwind over time

Wrapping Up

If you’ve made it this far, thank you for taking the time; hopefully some of the concepts were thought-provoking.  In Excellence by Design, I talk about ‘Relentless Innovation’…

Admittedly, there is so much movement in this space, that it’s very possible some of what I’ve written is obsolete, obvious, far-fetched, or some combination of all of the above, but that’s also part of the point of sharing the ideas: to encourage the dialogue.  My experience in technology over the last thirty-two years, especially with emerging capabilities like artificial intelligence, is that we can lose perspective on value creation in the rush to adopt something new and the tool becomes a proverbial hammer in search of a nail.

What would be far better is to envision a desired end state, identify what we’d really like to be able to do from a business capability standpoint, and then endeavor to make that happen with advanced technology.  I do believe there is significant power in these capabilities for the organizations that leverage them effectively.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 08/26/2024

Thoughts on Portfolio Management

Overview

Setting the stage

Having had multiple recent discussions related to portfolio management, I thought I’d share some thoughts relative to disciplined operations, in terms of the aforementioned subject and on the associated toolsets as well.  This is a substantial topic, but I’ll try to hit the main points and address more detailed questions as and when they arise.

In getting started, given all the buzz around GenAI, I asked ChatGPT “What are the most important dimensions of portfolio management in technology?”  What was interesting was that the response aligned with most discussions I’ve had over time, which is to say that it provided a process-oriented perspective on strategic alignment, financial management, and so on (a dozen dimensions overall), with a wonderfully summarized description of each (and it was both helpful and informative).  The curious part was that it missed the two things I believe are most important: courageous leadership and culture.

The remainder of this article will focus more on the process dimensions (I’m not going to frame it the same as ChatGPT for simplicity), but I wanted to start with a fundamental point: these things have to be about partnership and value first and process second.  If the focus becomes the process, there is generally something wrong in the partnership or the process is likely too cumbersome in how it is designed (or both).

 

Portfolio Management

Partnership

Portfolio management needs to start with a fundamental partnership and shared investment between business and technology leaders on the intended outcome.  Fortunately, or unfortunately, where the process tends to get the most attention (and part of why I’ve heard it so much in the last couple years) is in a difficult market/economy where spend management is the focus, and the intention is largely related to optimizing costs.  Broadly speaking, when times are good and businesses grow, the processes for prioritization and governance can become less rigorous in a speed-to-market mindset, the demand for IT services increases, and a significant amount of inefficiency, along with delivery and quality issues, can arise as a result.  The reality is that discipline should always be a part of the process because it’s in the best interest of creating value (long- and short-term) for an organization.  That isn’t to suggest artificial constraints, unnecessary gates in a process, or anything to hinder speed-to-market.  Rather, the goal of portfolio management should be to have a framework in place to manage demand through delivery in a way that facilitates predictable, timely, and quality delivery and a healthy, secure, robust, and modern underlying technology footprint that creates significant business value and competitive advantage over time.  That overall objective is just as relevant during a demand surge as it is when spending is constrained.

This is where courageous leadership becomes the other critical overall dimension.  It’s never possible to do everything and do it well.  The key is to maintain the right mix of work, creating the right outcomes, at a sustainable pace, with quality.  Where technology leaders become order takers is where a significant amount of risk can be introduced that actually hurts a business over time.  The primary issue is that taking on too much without thoughtful planning can result in critical resources being spread too thin, missed delivery commitments, poor quality, and substantial technical debt, all of which eventually undermine the originally intended goal of being “responsive”.  This is why partnership and mutual investment in the intended outcomes matter.  Not everything has to be “perfect” (and the concept itself doesn’t really exist in technology anyway), but the point is to make conscious choices on where to spend precious company resources to optimize the overall value created.

 

End-to-End Transparency

Shifting focus from the direction to the execution, portfolio management needs to start with visibility in three areas:

  • Demand management – the work being requested
  • Delivery monitoring – the work being executed
  • Value realization – the impact of what was delivered

In demand management, the focus should ideally be on both internal and external factors (e.g., business priorities, customer needs, competitive and industry trends), a thoughtful understanding of the short- and long-term value of the various opportunities, the requirements (internal and external) necessary to make them happen, and the desired timeframe for those results to be achieved.  From a process standpoint, notice of involvement and request for estimate (RFE) processes tend to be important (depending on the scale and structure of an organization), along with ongoing resource allocation and forecast information to evaluate these opportunities as they arise.

Delivery monitoring is important, given the dependencies that can and do exist within and across efforts in a portfolio, the associated resource needs, and the expectations they place on customers, partners, or internal stakeholders once delivered.  As and when things change, there should be awareness as to the impact of those changes on upcoming demand as well as other efforts within a managed portfolio.

Value realization is a generally underserved but relatively important part of portfolio management, especially in spending-constrained situations.  This level of discipline (at an overall level) is important for two primary reasons: first, to understand the efficacy of estimation and planning processes in the interest of future prioritization and planning and, second, to ensure investments were made effectively in the right priorities.  Where there is no “retrospective”, a lot of learnings that could drive continuous improvement and operational efficiency and effectiveness over time may be lost (ultimately having an adverse impact on business value created).

 

Maintaining a Balanced Portfolio

Two concepts that I believe are important to consider in how work is ultimately allocated/prioritized within an IT portfolio:

  • Portfolio allocation – the mix of work that is being executed on an ongoing basis
  • Prioritization – how work is ultimately selected and the process for doing so

A good mental model for portfolio allocation is a jigsaw puzzle.  Some pieces fit together, others don’t, and whatever pieces are selected, you ultimately are striving to have an overall picture that matches what you originally saw “on the box”.  While you can operate in multiple areas of a puzzle at the same time, you generally can’t focus on all of them concurrently and expect to be efficient on the whole.

What I believe a “good” portfolio should include is four key areas (with an optional fifth):

  • Innovation – testing and experimenting in areas where you may achieve significant competitive advantage or differentiation
  • Business Projects – developing solutions that create or enable new or enhanced business capabilities
  • Modernization – using an “urban renewal” mindset to continue to maintain, simplify, rationalize, and advance your infrastructure to avoid significant end of life, technical debt, or other adverse impacts from an aging or diverse technology footprint
  • Security – continuing to leverage tools and technologies that manage the ever increasing exposure associated with cyber security threats (internal and external)
  • Compliance (where appropriate) – investing in efforts to ensure appropriate conformance and controls in regulatory environments / industries

I would argue that, regardless of the level of overall funding, these categories should always be part of an IT portfolio.  There can obviously be projects or programs that provide forward momentum in more than one category above, but where there isn’t some level of investment in the “non-business project” areas, likely there will be a significant correction needed at some point in time that could be very disruptive from a business standpoint.  It is probably also worth noting that I am purposely not calling out a “technology projects” category above.  From my perspective, if a project doesn’t drive one of the other categories, I’d question what value it creates.  There is no value in technology for technology’s sake.

From a prioritization standpoint, I’ve seen both ends of the spectrum over the course of time: environments where there is no prioritization in place and everything with a positive business case (and even some without) is sent into execution, to ones where there is an elaborate “scoring” methodology, with weights and factors and metrics organized into highly elaborate calculations that create a false sense of “rigor” in the efficacy of the process.  My point of view overall is that, with the above portfolio allocation model in place, ensuring some balance in each of the critical categories of spend, a prioritization process should include some level of metrics, with an emphasis on short- and long-term business/financial impact as well as a conscious integration of the resource commitments required to execute the effort by comparison with other alternatives.  As important as any process, however, are the discussions that should be happening from a business standpoint to ensure the engagement, partnership, and overall business value being delivered through the portfolio (the picture on the box) in the decisions made.
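
By way of example only, a lightweight scoring approach might look like the sketch below.  The criteria and weights are hypothetical; the intent is a handful of transparent factors that inform the discussion rather than an elaborate calculation that substitutes for it.

    def priority_score(initiative, weights=None):
        # Simple weighted score across a few transparent criteria, each rated 0-5
        weights = weights or {"short_term_value": 0.3, "long_term_value": 0.4, "resource_demand": 0.3}
        return (weights["short_term_value"] * initiative["short_term_value"]
                + weights["long_term_value"] * initiative["long_term_value"]
                + weights["resource_demand"] * (5 - initiative["resource_demand"]))  # heavier demand lowers the score

    # Illustrative usage
    candidates = [
        {"name": "ERP consolidation", "short_term_value": 2, "long_term_value": 5, "resource_demand": 4},
        {"name": "Customer portal refresh", "short_term_value": 4, "long_term_value": 3, "resource_demand": 2},
    ]
    for c in sorted(candidates, key=priority_score, reverse=True):
        print(c["name"], round(priority_score(c), 2))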

 

Release Management

Part of arriving at the right set of work to do also comes down to release management.  A good analogy for release management is the game Tetris.  In Tetris, you have various shaped blocks dropping continually into a grid, with the goal of rotating and aligning them to fit as cleanly as possible with what is already on the board.  There are and always will be gaps and the fit will never be perfect, but you can certainly approach Tetris in a way that is efficient and well-aligned or in a way that is very wasteful of the overall real estate with which you have to work.

This is a great mental model for how project planning should occur.  If you do a good job, resources are effectively utilized, outcomes are predictable, there is little waste, and things run fairly smoothly.  If you don’t think about the process and continually inject new work into a portfolio without thoughtful planning as to dependencies and ongoing commitments, there can and likely will be significant waste, inefficiency, collateral impact, and issues in execution.

Release management comes down to two fundamental components:

  • Release strategy – the approach to how you organize and deliver major and minor changes to various stakeholder groups over time
  • Release calendar – an ongoing view of what will be delivered at various times, along with any critical “T-minus” dates and/or delivery milestones that can be part of a progress monitoring or gating process used in conjunction with delivery governance processes

From a release strategy standpoint, it is tempting in a world of product teams, DevSecOps, and CI/CD pipelines to assume everything comes down to individual product plans and their associated release schedules.  The two primary issues here are the time and effort it generally takes to deploy new technology and the associated change management impact on the end users who are expected to adopt those changes as and when they occur.  The more fragmented the planning process, the more business risk there is that end users or customers will ultimately be either under- or overserved at any given point in time, whereas a thoughtful release strategy can help create predictable, manageable, and sustainable levels of change over time across a diverse set of stakeholders being served.

The release calendar, aside from being an overall summary of what will be delivered when and to whom, also should ideally provide transparency into other critical milestones in the major delivery efforts so that, in the event something moves off plan (which is a very normal occurrence in technology and medium to larger portfolios), the relationship to other ongoing efforts can be evaluated from a governance standpoint to determine whether any rebalancing or slotting of work is required.
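
As a hedged sketch of what a minimal release calendar record might capture (the fields and release names are purely illustrative), something like the following makes it straightforward to ask which downstream releases are affected when a given release slips:

    from datetime import date

    releases = [
        {"name": "CRM 2.3", "go_live": date(2024, 9, 15),
         "t_minus": {"code_freeze": date(2024, 8, 30), "uat_complete": date(2024, 9, 6)},
         "depends_on": []},
        {"name": "Billing 1.1", "go_live": date(2024, 10, 1),
         "t_minus": {"code_freeze": date(2024, 9, 20)},
         "depends_on": ["CRM 2.3"]},
    ]

    def impacted_by_slip(calendar, slipped_release):
        # Downstream releases that may need rebalancing if one release moves off plan
        return [r["name"] for r in calendar if slipped_release in r["depends_on"]]

    print(impacted_by_slip(releases, "CRM 2.3"))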

 

Change Management

While I won’t spend a significant amount of time on this point, change management is an area where I’ve seen the process managed both very well and relatively poorly.  The easy part is generally managing change relative to a specific project or program, and that governance often exists in my experience.  The issue that can arise is when the leadership overseeing a specific project is only taking into account the implications of change on that effort alone, and not the potential ripple effect of a schedule, scope, or financial adjustment on the rest of the portfolio, future demand, or on end users in the event that releases are being adjusted.

 

On Tooling

Pivoting from processes to tools, at an overall level, I’m generally not a fan of over-engineering the infrastructure associated with portfolio management.  It is very easy for such an infrastructure to take on a life of its own, become a significant administrative burden that creates little value (beyond transparency), or contain outdated and inaccurate information to the degree that the process involves too much data without underlying ownership and usage of the data obtained.

The goal is the outcome, not the tools.

To the extent that a process is being established, I’d generally want to focus on transparency (demand through delivery) and a healthy ongoing discussion of priorities in the interest of making informed decisions.  Beyond that, I’ve seen a lot of reporting that doesn’t generally result in any level of actions being taken, which I consider to be very ineffective from a leadership and operational standpoint. 

Again, if the process is meant to highlight a relationship problem, such as a dashboard that requires a large number of employees to capture timesheets, roll them up, and mark them to various projects, all to have a management discussion to say “we’re over-allocated and burning out our teams”, my question would be why all of that data and effort was required to “prove” something, whether there is actual trust and partnership, whether there are other underlying delivery performance issues, and so on.  The process and tools are there to enable effective execution and the creation of business value, not to drain effort and energy on administrivia that could better be applied in delivery.

 

Wrapping Up

Overall, having spent a number of years seeing well developed and executed processes as well as less robust versions of the same, effective portfolio management comes down to value creation.  When the focus becomes about the process, the dashboard, the report, the metrics, something is amiss in my experience.  It should be about informing engaged leadership, fostering partnership, enabling decisions, and creating value.  That is not to say that average utilization of critical resources (as an example) isn’t a good thing to monitor and keep in mind, but it’s what you do with that information that matters.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 07/29/2024

Enterprise Architecture in an Adaptive World

Overview

Having covered a couple of future-oriented topics on Transforming Manufacturing and The Future of IT, I thought it would be good to come back to where we are with Enterprise Architecture as a critical function for promoting excellence in IT.

Overall, there is a critical balance to be struck in technology strategy today: technology-driven capabilities are advancing faster than any organization can reasonably adopt and integrate them (as is the exposure in cyber security); even if you could, the change management burden you’d place on end users would be highly disruptive and thereby undermine your desired business outcomes; and, in practice, rapidly evolving, sustainable change is the goal, not any one particular “implementation” of the latest thing.  This is what Relentless Innovation is about, referenced in my article on Excellence by Design.

 

Connecting Architecture back to Strategy

In the article, Creating Value Through Strategy, I laid out a framework for thinking about IT strategy at an overall level that can be used to create some focal points for enterprise architecture efforts in practice, namely:

  • Innovate – leveraging technology advancements in ways that promote competitive advantage
  • Accelerate – increasing speed to market/value to be more responsive to changing needs
  • Optimize – improving the value/cost ratio to drive return on technology investments overall
  • Inspire – creating a workplace that promotes retention and enables the above objectives
  • Perform – ensuring reliability, security, and performance in the production environment

The remainder of this article will focus on how enterprise architecture (EA) plays a role in enabling each of these dimensions given the pace of change today.

 

Breaking it Down

Innovate

Adopting new technologies for maximum business advantage is certainly the desired end game in this dimension, but unless there is a unique, one-off situation, the role of EA is fairly critical in making these advancements leverageable, scalable, and sustainable.  It’s worth noting, by the way, that I’m specifically referring to “enterprise architecture” here, not “solution architecture”, which I would consider to be the architecture and design of a specific business solution.  One should not exist without the other and, to the degree that solution architecture is emphasized without a governing enterprise architecture framework in place, the probability of significant technical debt, delivery issues, lack of reliability, and a host of other issues will skyrocket.

Where EA plays a role in promoting innovation is, minimally, in exploring market trends and looking for enabling technologies that can promote competitive advantage, but also, and very critically, in establishing the standards and guidelines by which new technologies should be introduced and integrated into the existing environment.

Using a “modern” example, I’ve seen a number of articles of late on the role of GenAI in “replacing” or “disrupting” application development, from the low-code/no code type solutions to the SaaS/package software domain, to everywhere.  While this sounds great in theory, it shouldn’t take long for the enterprise architecture questions to surface:

  • How do I integrate that accumulated set of “point solutions” in any standard way?
  • How do I meaningfully run analytics on the data associated with these applications?
  • How do I secure these applications so that I’m not exposed to the kinds of vulnerabilities I would be with any open-source technology (i.e., they are generated by an engine that may have inherent security gaps)?
  • How do I manage the interoperability between these internally-developed/generated solutions and standard packages (ERP, CRM, etc.) that are likely a core part of any sizeable IT environment?

In the above example, even if I find a way to replace existing low-code/no-code solutions with a new technology, it doesn’t mean that I don’t have the same challenges that exist with leveraging those technologies today.

In the case of innovation, the highest priorities for EA are therefore: looking for new disruptive technologies in the market, defining standards to enable their effective introduction and use, and then governing that delivery process to ensure standards are followed in practice.

 

Accelerate

Speed to market is a pressing reality in any environment I’ve seen, though it can lead to negative consequences as I discussed in Fast and Cheap… Isn’t Good.  Certainly, one of the largest barriers to speed is complexity, and complexity can come in many forms depending on the makeup of the overall IT landscape, the standards, processes, and governance in place related to delivery, and the diversity in solutions, tools, and technologies that are involved in the ecosystem as a whole.

While I talk about standards, reuse, and governance in the broader article on IT strategy, I would argue that the largest priority for EA in terms of accelerating delivery is in rationalization of solutions, tools, and technologies in use overall.

The more diverse the enterprise ecosystem is, the more difficult it becomes to add, replace, or integrate new solutions over time, and ultimately this will slow delivery efforts down to a snail’s pace (not to mention making them much more expensive and higher risk over time).

Using an example of a company that has performed many acquisitions over time, looking for opportunities to simplify and standardize core systems (e.g., moving to a single ERP versus having multiple instances and running consolidations through a separate tool) can lead to significant reduction in complexity over time, not to mention making it possible to redeploy resources to new capability development versus being spread across multiple redundant production solutions.

 

Optimize

In the case of increasing the value/cost ratio, the ability to rationalize tools and solutions should definitely lead to reduced cost of ownership (beyond the delivery benefit mentioned above), but the largest priority should be in identifying ways to modernize on a continual basis.

Again, in my experience, modernization is difficult to prioritize and fund until there is an end-of-life or end-of-support scenario, at which point it becomes a “must do” priority, and causes a significant amount of delivery disruption in the process.

A much better and healthier approach to modernization, I believe, is a more disciplined, thoughtful one that is akin to “urban renewal”, where there is an annual allocation of work directed at modernization on a prioritized basis (the criteria for which should be established through EA, given an understanding of other business demand), such that significant “events” are mitigated and it becomes a way of working on a sustained basis.  In this way, the delineation between “keep the lights on” (KTLO) support, maintenance (which is where modernization efforts belong), and enhancement/build-related work is important.  In my experience, that second maintenance bucket is too often lumped into KTLO work, it is underserved/underfunded, and ultimately that creates periodic crises in IT to remediate things that should’ve been addressed far sooner (at a much lower cost) if a more disciplined portfolio management strategy were in place.

 

Inspire

In the interest of supporting the above objectives, having the right culture and skills to support ongoing evolution is imperative.  To that end, the role of EA should be in helping to inform and guide the core skills needed to “lean forward” into advanced technology, while maintaining the right level of competency to support the footprint in place.

Again, this is where having a focus on modernization can help, as it creates a means to sunset legacy tools and technologies, to enable that continuous evolution of the skills the organization needs to operate (whether internally or externally sourced).

 

Perform

Finally, the role of EA in the production setting could be more or less difficult depending on how well the above capabilities are defined and supported in an enterprise.  To the degree standards, rationalization, modernization, and the right culture and skills are in place, the role of EA would be helping to “tune” the environment to perform better and at a lower cost to operate.

Where there is a priority need for EA is ensuring there is an integrated approach to cyber security that aligns to development processes (e.g., DevSecOps) and a comprehensive, integrated strategy to monitor and manage performance in the production environment so that production incidents (using ITIL-speak) can be minimized and mitigated to the maximum degree possible.

 

Wrapping Up

Looking back on the various dimensions and priorities outlined above in relation to the role of EA, perhaps there isn’t much that I can argue is very different than what the role entailed five or ten years ago… establish standards, simplify / rationalize, modernize, retool, govern… that being said, the pace at which these things need to be accomplished and the criticality of doing them well is more important than ever with the increasing role technology plays in the digital enterprise.  Like other dimensions required to establish excellence in IT, courageous leadership is where this needs to start, because it takes discipline to do things “right” while still doing them at a pace and with an agility that discerns the things that matter to an enterprise versus those that are simply ivory tower thinking.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 03/27/2024

The Future of IT

Overview

Background

I’ve been thinking about writing this article for a while, with the premise of “what does IT look like in the future?”  In a digital economy, the role of technology in The Intelligent Enterprise will certainly continue to be creating value and competitive business advantage.  That being said, one can reasonably assume a few things that are true today for medium to large organizations will continue to be part of that reality as well, namely:

  • The technology footprint will be complex and heterogeneous in its makeup. To the degree that there is a history of acquisitions, even more so
  • Cost will always be a concern, especially to the degree it exceeds value delivered (this is explored in my article on Optimizing the Value of IT)
  • Agility will be important in adopting and integrating new capabilities rapidly, especially given the rate of technology advancement only appears to be accelerating over time
  • Talent management will be complex given the variety of technologies present will be highly diverse (something I’ve started to address in my Workforce and Sourcing Strategy Overview article)

My hope is to provide some perspective in this article on where I believe things will ultimately move in technology, in the underlying makeup of the footprint itself, how we apply capabilities against it, and how to think about moving from our current reality to that environment.  Certainly, all of the five dimensions of what I outlined in my article on Creating Value Through Strategy will continue to apply at an overall strategy level (four of which are referenced in the bullet points above).

A Note on My Selfish Bias…

Before diving further into the topic at hand, I want to acknowledge that I am coming from a place where I love software development and the process surrounding it.  I taught myself to program in the third grade (in Apple Basic), got my degree in Computer Science, started as a software engineer, and taught myself Java and .Net for fun years after I stopped writing code as part of my “day job”.  I love the creative process for conceptualizing a problem, taking a blank sheet of paper (or white board), designing a solution, pulling up a keyboard, putting on some loud music, shutting out distractions, and ultimately having technology that solves that problem.  It is a very fun and rewarding thing to explore those boundaries of what’s possible and balance the creative aspects of conceptual design with the practical realities and physical constraints of technology development.

All that being said, insofar as this article is concerned, when we conceptualize the future of IT, I wanted to put a foundational position statement forward to frame where I’m going from here, which is:

Just because something is cool and I can do it, doesn’t mean I should.

That is a very difficult thing to internalize for those of us who live and breathe technology professionally.  Pride of authorship is a real thing and, if we’re to embrace the possibilities of a more capable future, we need to apply our energies in the right way to maximize the value we want to create in what we do.

The Producer/Consumer Model

Where the Challenge Exists Today

The fundamental problem I see in technology as a whole today (I realize I’m generalizing here) is that we tend to want to be good at everything, build too much, customize more than we should, and throw caution to the wind when it comes to things like standards and governance, treating them as inconveniences that slow us down in the “deliver now” environment in which we generally operate (see my article Fast and Cheap, Isn’t Good for more on this point).

Where that leaves us is bloated, heavy, expensive, and slow… and it’s not good.  For all of our good intentions, IT doesn’t always have the best reputation for understanding, articulating, or delivering value in business terms and, in quite a lot of situations I’ve seen over the years, our delivery story can be marred with issues that don’t create a lot of confidence when the next big idea comes along and we want to capitalize on the opportunity it presents.

I’m being relatively negative on purpose here, but the point is to start with the humility of acknowledging the situation that exists in a lot of medium to large IT environments, because charting a path to the future requires a willingness to accept that reality and to create sustainable change in its place.  The good news, from my experience, is there is one thing going for most IT organizations I’ve seen that can be a critical element in pivoting to where we need to be: a strong sense of ownership.  That ownership may show up as frustration in the status quo depending on the organization itself, but I’ve rarely seen an IT environment where the practitioners themselves don’t feel ownership for the solutions they build, maintain, and operate or have a latent desire to make them better.  There may be a lack of a strategy or commitment to change in many organizations, but the underlying potential to improve is there, and that’s a very good thing if capitalized upon.

Challenging the Status Quo

Pivoting to the future state has to start with a few critical questions:

  • Where does IT create value for the organization?
  • Which of those capabilities are available through commercially available solutions?
  • To what degree are “differentiated” capabilities or features truly creating value? Are they exceptions or the norm?

Using an example from the past, a delivery team was charged with solving a set of business problems that they routinely addressed through custom solutions, even though the same capabilities could be accomplished through integration of one or more commercially available technologies.  From an internal standpoint, the team promoted the idea that they had a rapid delivery process, were highly responsive to the business needs they were meant to address, etc.  The problem is that the custom approach actually cost more money to develop, maintain, and support, and was considerably more difficult to scale.  Given that solutions were also continually developed with a lack of standards, the team’s ability to adopt or integrate any new technologies available on the market was non-existent.  Those situations inevitably led to new custom solutions, and the costs of ownership skyrocketed over time.

This situation begs the question: if it’s possible to deliver equivalent business capability without building anything “in house”, why not do just that?

In the proverbial “buy versus build” argument, these are the reasons I believe it is valid to ultimately build a solution:

  • There is nothing commercially available that provides the capability at a reasonable cost
    • I’m referencing cost here, but it’s critical to understand the TCO implications of building and maintaining a solution over time. They are very often underestimated.
  • There is a commercially available solution that can provide the capability, but something about privacy, IP, confidentiality, security, or compliance-related concerns makes that solution infeasible in a way that contractual terms can’t address
    • I mention contracting purposefully here, because I’ve seen viable solutions eliminated from consideration over a lack of willingness to contract effectively, and that seems suboptimal by comparison with the cost of building alternative solutions instead

Ultimately, we create value through business capabilities enabled by technology; “who” built them doesn’t matter.

Rethinking the Model

My assertion is that we will obtain the most value and acceleration of business capabilities when we shift towards a producer/consumer model in technology as a whole.

What that suggests is that “corporate IT” largely adopts the mindset of the consumer of technologies (specifically services or components) developed by producers focused purely on building configurable, leverageable components that can be integrated in compelling ways into a connected ecosystem (or enterprise) of the future.

What corporate IT “produces” should be limited to differentiated capabilities that are not commercially available, and a limited set of foundational capabilities that will be outlined below.  By producing less and thinking more like a consumer, corporate IT should shift its focus internally towards how technology can more effectively enable business capability and innovation, and externally towards understanding, evaluating, and selecting from the best-of-breed capabilities in the market that help deliver on those business needs.

The implication, of course, for those focused on custom development would be to move towards those differentiated capabilities or entirely towards the producer side (in a product-focused environment), which honestly could be more satisfying than corporate IT can be for those with a strong development inclination.

The cumulative effect of these adjustments should lead to an influx of talent into the product community, an associated expansion of available advanced capabilities in the market, and an accelerated ability to eventually adopt and integrate those components in the corporate environment (assuming the right infrastructure is then in place), creating more business value than is currently possible where everyone tries to do too much and sub-optimizes their collective potential.

Learning from the Evolution of Infrastructure

The Infrastructure Journey

You don’t need to look very far back in time to remember when the role of a CTO was largely focused on managing data centers and infrastructure in an internally hosted environment.  Along the way, third parties emerged to provide hosting services and alleviate the need to be concerned with routine maintenance, patching, and upgrades.  Then converged infrastructure and the software-defined data center provided opportunities to consolidate and optimize that footprint and manage cost more effectively.  With the rapid evolution of public and private cloud offerings, the arguments for managing much of your own infrastructure beyond those related specifically to compliance or legal concerns are very limited, and the trajectory of edge computing environments is still evolving fairly rapidly as specialized computing resources and appliances are developed.  The learning: it’s not what you manage in house that matters; it’s the services you provide relative to security, availability, scalability, and performance.

Ok, so what happens when we apply this conceptual model to data and applications?  What if we were to become a consumer of services in these domains as well?  The good news is that this journey is already underway; the question is how far we should take things in the interest of optimizing the value of IT within an organization.

The Path for Data and Analytics

In the case of data, I think about this area in two primary dimensions:

  • How we store, manage, and expose data
  • How we apply capabilities to that data and consume it

In terms of storage, the shift from hosted data to cloud-based solutions is already underway in many organizations.  The key levers continue to be ensuring data quality and governance, finding ways to minimize data movement and optimize data sharing (while facilitating near real-time analytics), and establishing means to expose data in standard ways (e.g., virtualization) that enable downstream analytic capabilities and consumption methods to scale and work consistently across an enterprise.  Certainly, the cost of ingress and egress of data across environments is a key consideration, especially where SaaS/PaaS solutions are concerned.  Another opportunity continues to be avoiding the money wasted on building data lakes (beyond archival and unstructured data needs) when viable platform solutions in that space are available.  From my perspective, the less time and resources spent on moving and storing data to no business benefit, the more energy that can be applied to exposing, analyzing, and consuming that data in ways that create actual value.  Simply said, we don’t create value in how or where we store data; we create value in how we consume it.

On the consumption side, having a standards-based environment with a consistent method for exposing data and enabling integration will lend itself well to tapping into the ever-expanding range of analytical tools on the market, as well as swapping out one technology for another as those tools continue to evolve and advance in their capabilities over time.  The other major pivot is shifting from “traditional” analytical reporting and business intelligence solutions to more dynamic data apps that leverage AI to inform meaningful end-user actions, whether that’s for internal or external users of systems.  Compliance-related needs aside, at an overall level, the primary goal of analytics should be informed action, not administrivia.
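
To make the “consistent method for exposing data” idea concrete, here is a minimal sketch (my own illustration, with hypothetical names, not a reference to any particular platform) of a thin access contract that analytics consumers depend on, so the engine behind it can be swapped as tools evolve.

    # A minimal sketch of a standards-based data access layer (hypothetical names).
    # Consumers depend only on the DataService protocol, so the underlying engine
    # (warehouse, virtualization layer, lakehouse) can be swapped without rework.
    from typing import Any, Iterable, Mapping, Protocol


    class DataService(Protocol):
        def query(self, dataset: str, filters: Mapping[str, Any]) -> Iterable[dict]:
            """Return rows from a governed, named dataset."""


    class InMemoryDataService:
        """Stand-in implementation; a warehouse or virtualization layer could
        implement the same protocol without changing any consumers."""

        def __init__(self, datasets: Mapping[str, list[dict]]):
            self._datasets = datasets

        def query(self, dataset: str, filters: Mapping[str, Any]) -> Iterable[dict]:
            rows = self._datasets.get(dataset, [])
            return [r for r in rows if all(r.get(k) == v for k, v in filters.items())]


    def churn_report(svc: DataService) -> list[dict]:
        # The report only knows the contract, not the engine behind it.
        return list(svc.query("customer_churn", {"region": "NA"}))


    svc = InMemoryDataService({"customer_churn": [{"region": "NA", "customer": "C-1", "risk": 0.82}]})
    print(churn_report(svc))

The design point is simply that the consumer binds to a governed dataset name and a stable contract, not to a vendor-specific interface.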

The Shift In Applications

The challenge in the applications environment is arbitrating the balance between monolithic (“all in”) solutions, like ERPs, and a fully distributed component-based environment that requires potentially significant management and coordination from an IT standpoint. 

Conceptually, for smaller organizations, where the core applications (like an ERP suite + CRM solution) represent the majority of the overall footprint and there aren’t a significant number of specialized applications that must interoperate with them, it likely would be appropriate and effective to standardize based on those solutions, their data model, and integration technologies.

On the other hand, the more diverse and complex the underlying footprint is for a medium- to large-size organization, the more value there is in looking at ways to decompose these relatively monolithic environments to provide interoperability across solutions, enable rapid integration of new capabilities into a best-of-breed ecosystem, and facilitate analytics that span multiple platforms in ways that would be difficult, costly, or impossible to do within any one or two given solutions.  What that translates to, in my mind, is an eventual decline of the monolithic ERP-centric environment in favor of a service-driven ecosystem where individually configured capabilities are orchestrated through data and integration standards, with components provided by various producers in the market.  That doesn’t necessarily align to the product strategies of individual companies trying to grow through complementary vertical or horizontal solutions, but I would argue those products should create value at an individual component level and be configurable such that swapping out one component of a larger ecosystem remains feasible without having to abandon the other products in that application suite (which may individually be best-of-breed).

Whether shifting from a highly insourced to a highly outsourced/consumption-based model for data and applications will be feasible remains to be seen, but there was certainly a time not that long ago when hosting a substantial portion of an organization’s infrastructure footprint in the public cloud was a cultural challenge.  Moving up the technology stack from the infrastructure layer to data and applications seems like a logical extension of that mindset, placing emphasis on capabilities provided and value delivered versus assets created over time.

Defining Critical Capabilities

Own Only What is Essential

Making an argument to shift to a consumption-oriented mindset in technology doesn’t mean there isn’t value in “owning” anything; rather, it’s meant to be a call to evaluate and challenge assumptions about where IT creates differentiated value and to apply our energies toward those things.  What can be leveraged, configured, and orchestrated, I would buy and use.  What should be built?  Capabilities that are truly unique, create competitive advantage, can’t be sourced in the market, and create a unified experience for end users.  On the final point, I believe that shifting to a disaggregated applications environment could create complexity for end users in navigating end-to-end processes in intuitive ways, especially to the degree that data apps and integrated intelligence become a common way of working.  To that end, building end-user experiences that can leverage underlying capabilities provided by third parties feels like a thoughtful balance between a largely outsourced application environment and a highly effective and productive individual consumer of technology.

Recognize Orchestration is King

Workflow and business process management is not a new concept in the integration space, but it’s been elusive (in my experience) for many years for a number of reasons.  What is clear at this point is that, with the rapid expansion of technology capabilities continuing to hit the market, our ability to synthesize a connected ecosystem that blends these unique technologies with existing core systems is critical.  The more we can do this in consistent ways, and the more we shift toward a configurable, dynamic, framework-driven environment, the more business flexibility and agility we will provide… and that translates to innovation and competitive advantage over time.  Orchestration also forces a decision about which processes are important enough that they shouldn’t be relegated to the internal workings of a platform solution or ERP, but instead taken in-house, mapped out, and coordinated with the intention of creating differentiated value that can be measured, evaluated, and optimized over time.  Clearly the scalability and performance of this component are critical, especially to the degree there is a significant amount of activity being managed through this infrastructure, but I believe the transparency, agility, and control afforded in this kind of environment would greatly outweigh the complexity involved in its implementation.
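
As a sketch of what “mapped out, coordinated, and measured” could look like in practice (illustrative only; the step names and tiny engine are assumptions, not a product reference), a process can be expressed as configuration and executed by a small engine that records per-step timings for later analysis:

    # Minimal orchestration sketch: the process is configuration, not code buried
    # in a platform, and every step execution is timed so it can be analyzed.
    import time
    from typing import Callable, Dict, List

    StepFn = Callable[[dict], dict]


    def run_process(name: str, steps: List[str], registry: Dict[str, StepFn], payload: dict):
        metrics = []
        for step in steps:
            start = time.perf_counter()
            payload = registry[step](payload)            # execute the configured step
            metrics.append((name, step, time.perf_counter() - start))
        return payload, metrics                          # metrics feed later optimization


    # Hypothetical step implementations for an order-to-cash style flow.
    registry = {
        "validate_order": lambda p: {**p, "valid": True},
        "reserve_inventory": lambda p: {**p, "reserved": True},
        "invoice": lambda p: {**p, "invoiced": True},
    }

    result, metrics = run_process(
        "order_to_cash", ["validate_order", "reserve_inventory", "invoice"], registry, {"order_id": 42}
    )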

Put Integration in the Center

In a service-driven environment, the infrastructure for integration, streaming in particular, along with a publish-and-subscribe model for event-driven processing, will clearly be critical for high-priority enterprise transactions.  The challenge in integration conversations, in my experience, tends to be defining the transactions that “matter”, in terms of facilitating interoperability and reuse, versus those that are suitable for point-to-point, one-off connections.  There is ultimately a cost for reuse when you try to scale, and there is discipline needed to arbitrate those decisions to ensure they are appropriate to business needs.
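
For the publish-and-subscribe point, the toy example below shows the shape of the pattern; in a real environment this role would be played by a streaming or messaging platform, and the topic and handler names here are purely illustrative.

    # A toy in-memory event bus to illustrate publish/subscribe for the
    # transactions that "matter" enterprise-wide; real systems would use a
    # streaming or messaging platform rather than this class.
    from collections import defaultdict
    from typing import Callable, Dict, List

    Handler = Callable[[dict], None]


    class EventBus:
        def __init__(self):
            self._subscribers: Dict[str, List[Handler]] = defaultdict(list)

        def subscribe(self, topic: str, handler: Handler) -> None:
            self._subscribers[topic].append(handler)

        def publish(self, topic: str, event: dict) -> None:
            for handler in self._subscribers[topic]:
                handler(event)                      # each consumer reacts independently


    bus = EventBus()
    bus.subscribe("order.created", lambda e: print("billing saw", e["order_id"]))
    bus.subscribe("order.created", lambda e: print("fulfillment saw", e["order_id"]))
    bus.publish("order.created", {"order_id": 1001})   # one event, many consumers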

Reassess Your Applications/Services

In any medium to large organization, there is likely technology sprawl to be addressed, particularly if there is a material level of custom development (because component boundaries likely won’t be well architected) and acquired technology (because of the duplication it can cause in solutions and instances of solutions) in the landscape.  Another complicating factor can be the diversity of technologies and architectures in place, depending on whether or not a disciplined modernization effort exists, the level of architecture governance in place, and the rate and means by which new technologies are introduced into the environment.  All of these factors call for a thoughtful portfolio strategy to identify critical business capabilities and ensure the technology solutions meant to enable them are modern, configurable, rationalized, and integrated effectively from an enterprise perspective.

Leverage Data and Insights, Then Optimize

With analytics and insights being critical to differentiated business performance, an effective data governance program with business stewardship, the selection of the right core, standard data sets to enable purposeful, actionable analytics, and the process performance data associated with orchestrated workflows are all critical components of any future IT infrastructure.  This is not all data; it’s the subset that creates enough business value to justify the investment in making it actionable.  As process performance data is gathered through the orchestration approach, analytics can be performed to look for opportunities to evolve processes, configurations, rules, and other characteristics of the environment based on key business metrics to improve performance over time.

Monitor and Manage

With the expansion of technologies and components, internal and external to the enterprise environment, having the ability to monitor and detect issues, proactively take action, and mitigate performance, security, or availability problems will become increasingly important.  Today’s tools are often too fragmented and siloed to achieve the holistic understanding that is needed across hosted and cloud-based environments, including visibility into internal and external security threats.

Secure “Everything”

With the risks that zero trust and vulnerability management are meant to address expanding at a rate that exceeds an organization’s ability to mitigate them, treating security as a fundamental requirement of current and future IT environments is a given.  The development of a purposeful cyber strategy, prioritizing areas for tooling and governance effectively, and continuing to evolve and adapt that infrastructure will be core to the DNA of operating successfully in any organization.  Security is not a nice-to-have; it’s a requirement.

The Role of Standards and Governance

What makes the framework-driven environment of the future work is ultimately having meaningful standards and governance, particularly for data and integration, but extending into application and data architecture, along with how those environments are constructed and layered to facilitate evolution and change over time.  Excellence takes discipline and, while that may require some additional investment in cost and time during the initial and ongoing stages of delivery, it will easily pay for itself in business agility, operating cost / total cost of ownership, and risk/exposure to cyber incidents over time.

The Lending Example

Having spent time a number of years ago understanding and developing strategy in the consumer lending domain, I find the similarities in process between direct and indirect lending, prime and specialty/sub-prime, and simple products like credit cards versus more complex ones like mortgages difficult to ignore.  That being said, it isn’t unusual for systems to exist in a fairly siloed manner, from application to booking, through document preparation, and into the servicing process itself.

What’s interesting, from my perspective, is where the differentiation actually exists across these product sets: in the rules and workflow being applied, while the underlying functions themselves are largely the same.  As an example, one thing that differentiates a lender is their risk management policy, not necessarily the tool they use to implement their underwriting rules or scoring models per se.  Similarly, whether pulling a credit score sits at the front end of the process (as in credit cards) or is an intermediate step (as in education lending), a configurable workflow engine could enable origination across a diverse product set with essentially the same back-end capabilities, and likely at a lower operating cost.
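
As a purely illustrative sketch of that separation (none of these names reflect an actual lending platform), the back-end capabilities below are shared across products, while each product contributes only its step ordering and risk policy as configuration:

    # Shared back-end capabilities reused across lending products; product-level
    # differentiation lives in configuration (step order + risk policy), not code.
    PRODUCT_CONFIG = {
        "credit_card": {
            "steps": ["pull_credit", "apply_risk_policy", "decision"],
            "policy": {"min_score": 680},
        },
        "education_loan": {
            "steps": ["collect_documents", "pull_credit", "apply_risk_policy", "decision"],
            "policy": {"min_score": 640},
        },
    }


    def pull_credit(app, policy):
        return {**app, "score": 700}            # stand-in for a bureau call


    def collect_documents(app, policy):
        return {**app, "documents": True}


    def apply_risk_policy(app, policy):
        return {**app, "eligible": app["score"] >= policy["min_score"]}


    def decision(app, policy):
        return {**app, "approved": app.get("eligible", False)}


    STEP_REGISTRY = {fn.__name__: fn for fn in (pull_credit, collect_documents, apply_risk_policy, decision)}


    def originate(product: str, application: dict) -> dict:
        config = PRODUCT_CONFIG[product]
        for step in config["steps"]:
            application = STEP_REGISTRY[step](application, config["policy"])
        return application


    print(originate("credit_card", {"applicant": "A-123"}))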

So why does it matter?  Well, to the degree that the focus shifts from developing core components that implement relatively commoditized capability to the rules and processes that enable various products to be delivered to end consumers, the speed with which products can be developed, enhanced, modified, and deployed should be significantly improved.

Ok, Sounds Great, But Now What?

It Starts with Culture

At the end of the day, even the best designed solutions come down to culture.  As I mentioned above, excellence takes discipline and, at times, a patience and thoughtfulness that seem to contradict the speed with which we want to operate from a technology (and business) standpoint.  That being said, given the challenges that ultimately arise when you operate without the right standards, discipline, and governance, the outcome is well worth the associated investments.  This is why I placed courageous leadership as the first pillar among the five dimensions outlined in my article on Excellence by Design.  Leadership is critical and, without it, everything else becomes much more difficult to accomplish.

Exploring the Right Operating Model

Once a strategy is established to define the desired future state and a culture to promote change and evolution is in place, it is worth looking at how to organize around managing that change.  I don’t necessarily believe in “all in” operating approaches, whether that is plan/build/run, a product-based orientation, or some other relatively established model.  I do believe that, given leadership and adaptability are critically needed for transformational change, it is worth exploring how the organization is aligned to maintaining and operating the legacy environment versus enabling the establishment of, and transition to, the future environment.  As an example, rather than assuming a pure product-based orientation, which could mushroom into a bloated organization design where not all leaders are well suited to manage change effectively, I’d consider organizing around a defined set of “transformation teams” that operate in a product-oriented/iterative model.  These teams would take on the scope of pieces of the technology environment; re-orient, optimize, modernize, and align them to the future operating model; and then transition those working assets to other leaders who maintain or manage those solutions, freeing the transformation teams to move on to the next set of targets.  This should be done in concert with establishing “common components” teams (where infrastructure like cloud platform enablement can be a component as well) that are driven to produce core, reusable services or assets that can be consumed in the interest of ultimately accelerating delivery and enabling wider adoption of the future operating model for IT.

Managing Transition

One of the consistent challenges with any kind of transformative change is moving from what is likely a very diverse, heterogeneous environment to one that is standards-based, governed, and relatively optimized.  While it’s tempting to take on too much scope and ultimately undermine the aspirations of change, I believe there is a balance to be struck: defining and establishing some core delivery capabilities that are part of the future infrastructure, while incrementally migrating individual capabilities into that future environment over time.  This is another case where disciplined operations and disciplined delivery come into play, so that changes are delivered consistently, sustainably, and in line with the desired future state.

Wrapping Up

While a certain level of evolution is guaranteed as part of working in technology, the primary question is whether we will define and shape that future or be continually reacting and responding to it.  My belief is that we can, through a level of thoughtful planning and strategy, influence and shape the future environment to be one that enables rapid evolution as well as accelerated integration of best-of-breed capabilities at a pace and scale that is difficult to deliver today.  It’s unlikely we’ll truly reach a full producer/consumer environment that is service-based, standardized, governed, orchestrated, fully secured, and optimized, but aspiring to that level of excellence and falling short would still leave us in a considerably better place than where we are today… and it’s a journey worth making in my opinion.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 03/08/2024

Transforming Manufacturing

Overview

Growing Up in Manufacturing

My father ran his own business when I was growing up.  His business had two components: first, he manufactured pinion wire (steel or brass rods of various diameters with teeth from which gears are cut) and, second, he produced specialty gears that were used in various applications (e.g., the timing mechanism of an oil pump).  It was something he used to raise and support a large Italian family and it was pretty much a one-man show, with help from his kids as needed, whether that was counting and quality testing gears with mating parts or cutting, packing, and shipping material to various customers across North America.  He acquired and learned how to operate screw machines to produce pinion wire but eventually shifted to a distribution business, where he would buy finished material in quantity and then distribute it to middle-market customers at lower volumes at a markup.

His business was as low tech as you could get, with a little card file he maintained that had every order by customer written out on an index card, tracking the specific item/part, volume, and pricing so he had a way to understand history as new requests for quotes and orders came in, and to understand purchase patterns over time.  It was largely a relationship business, and he took his customer commitments to heart to the point that dinner conversation could easily drift into a worry about a disruption in his supply chain (e.g., something not making it to the electroplater on time) and whether he might miss his promised delivery date.  Integrity and accountability were things that mattered and it was very clear his customers knew it.  He had a notepad on which he’d jot things down to keep some subset of information on customer/prospect follow-ups, active orders, pending quotes, and so on, but to say there was a system outside what he kept in his head would be unfair to his mental capacity, which was substantial.

It was a highly manual business and, as a person who taught myself how to write software in the third grade, I was always curious what he could do to make things a little easier, more structured, and less manual, even though that was an inescapable part of running his business on the whole.  That isn’t to say he shared that concern, given he’d developed a system and way of operating over many years, knew exactly how it worked, and was entirely comfortable with it.  There was also the small matter of my father being relatively stubborn, but that didn’t necessarily deter me from suggesting things could be improved.  I do like a challenge, after all…

As it happened, when I was in high school, we got our first home computer (somewhere in the mid-1980s) and, despite its relatively limited capacity, I thought it would be a good idea to figure out a way to make something about running his business a little easier with technology.  To that end, I wrote a piece of software that would take all of his order management off of the paper index cards and put it into an application.  The ability to look up customer history, enter new orders, look at pricing differences across customers, etc. were all things I figured would make life a lot easier, not to mention reducing the need to maintain this overstuffed card file that seemed highly inefficient to me.

By this point, I suspect it’s clear to the reader what happened… which is that, while my father appreciated the good intentions and concept, there was no interest in changing the way he’d been doing business for decades in favor of using technology he found a lot more confusing and intimidating than what he already knew (and that worked to his level of satisfaction).  I ended up relatively disappointed, but I learned a valuable lesson: the first challenge in transformation is changing mindsets… without that, the best vision in the world will fail, no matter what value it may create.

It Starts with Mindset

I wanted to start this article with the above story because, despite nearly forty years having passed since I tried to introduce a little bit of automation to my father’s business, manufacturing in today’s environment can be just as antiquated and resistant to change as it was then, treating technology as an afterthought, a bolt-on, or a cost of doing business rather than the means to unlock the potential of becoming a digital business in even the most “low tech” of operating environments.

While there is an inevitable and essential dependence on people, equipment, and processes, my belief is that we have a long way to go on understanding the critical role technology plays in unlocking the potential of all of those things to optimize capacity, improve quality, ensure safety, and increase performance in a production setting.

The Criticality of Discipline

Having spent a number of years understanding various approaches to digital manufacturing, one point I wanted to raise prior to going into more of the particulars is the importance of operating with a holistic vision and striking the balance between agility and long-term value creation.  As I addressed in my article Fast and Cheap Isn’t Good, too much speed without quality can lead to complexity, uncontrolled and inflated TCO, and an inability to integrate and scale digital capabilities over time.  Wanting something “right now” isn’t an excuse not to do things the right way, and eventually there is a price to pay for tactical thinking when solutions don’t scale or produce more than incremental gains.

This is also related to “Framework-Driven Design” that I talk about in my article on Excellence by Design.  It is rarely the case that there is an opportunity to start from scratch in modernizing a manufacturing facility, but I do believe there is substantial value in making sure that investments are guided by an overall operating concept, technology strategy, and evolving standards that will, over time, transform the manufacturing environment as a whole and unlock a level of value that isn’t possible where incremental gains are always the goal.  Sustainable change takes time.

The remainder of this article will focus on a set of areas that I believe form the core of the future digital manufacturing environment.  Given this is a substantial topic, I will focus on the breadth of the subject versus going too deep into any one area.  Those can be follow-up articles as appropriate over time.

 

Leveraging Data Effectively

The Criticality of Standards

It is a foregone conclusion that you can’t optimize what you can’t track, measure, and analyze in real-time.  To that end, starting with data and standards is critical in transforming to a digital manufacturing environment.  Without standards, the ability to benchmark, correlate, and analyze performance will be severely compromised.  This can be as basic as how a camera system, autonomous vehicle, drone, conveyor, or digital sensor is integrated within a facility, to the representation of equipment hierarchies, or how operator roles and processes are tracked across a set of similar facilities.  Where standards for these things don’t exist, value will be constrained to a set of individual point solutions, use cases, and one-off successes, and the technical debt associated with retrofitting and mapping across the various conventions in place will create a maintenance effort that limits the focus on true innovation and optimization.  Where standards are implemented and scaled over time, however, the value opportunity will eventually cross over into exponential gains that aren’t otherwise possible.  This isn’t to suggest that there is a one-size-fits-all approach to standards or that every solution needs to conform for the sake of an ivory tower ideal.  The point is that it’s worth slowing down the pace of “progress” at times to understand the value in designing solutions for longer-term value creation.
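
To make the idea of a standard more tangible, here is a generic sketch (loosely in the spirit of equipment-hierarchy models such as ISA-95, but not a prescription) of a sensor reading that always carries a consistent reference to where it sits in the enterprise/site/line/equipment hierarchy, so it can be benchmarked and correlated across facilities:

    # A generic sketch of a standardized sensor reading: the payload shape and
    # the equipment-hierarchy reference stay consistent across facilities and vendors.
    from dataclasses import dataclass
    from datetime import datetime, timezone


    @dataclass(frozen=True)
    class EquipmentRef:
        enterprise: str
        site: str
        line: str
        equipment: str          # e.g., "conveyor-07"


    @dataclass(frozen=True)
    class SensorReading:
        equipment: EquipmentRef
        metric: str             # e.g., "vibration_rms_mm_s", named from a shared catalog
        value: float
        unit: str
        recorded_at: datetime


    reading = SensorReading(
        equipment=EquipmentRef("acme", "plant-eu-01", "line-3", "conveyor-07"),
        metric="vibration_rms_mm_s",
        value=2.4,
        unit="mm/s",
        recorded_at=datetime.now(timezone.utc),
    )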

The Role of Data Governance

It’s impossible to discuss the criticality of standards without also highlighting the need for active, ongoing data governance: to ensure standards are followed, to ensure data quality at the local and enterprise level is given priority (especially to the degree that analytical insights and AI become core to informed decision making), and to help identify and surface additional areas where standards may be needed to create further insights and value across the operating environment.  The upshot is that there need to be established roles and accountability for data stewards at the facility and enterprise level if there is an aspiration to drive excellence in manufacturing, no matter what the present level of automation is across facilities.

 

Modeling Distributed Operations

Applying Distributed Computing

There is a power in distributed computing that enables you to scale execution beyond the capacity you can achieve with a single machine (or processor).  The model requires an overall coordinator to distribute work and monitor execution, and individual processors to churn out calculations as rapidly as they are able.  As you add processors, you increase capacity, so long as the orchestrator can continue to manage and coordinate the parallel activity effectively.

From a manufacturing standpoint, the concept applies well across a set of distributed facilities, where the overall goal is to optimize the performance and utilization of available capacity given varying demand signals, individual operating characteristics of each facility, cost considerations, preventative maintenance windows, etc.  It’s a system that can be measured, analyzed, and optimized, with data gathered and measured locally, a subset of which is used to inform and guide the macro-level process.
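
A deliberately simplified sketch of that coordinator/worker idea might look like the following, where a macro-level scheduler allocates demand to plants based on available capacity and unit cost; the data and the greedy allocation rule are illustrative assumptions, not a planning algorithm recommendation.

    # Toy macro-level scheduler: allocate demand across facilities by lowest unit
    # cost until capacity runs out.  Real planning would weigh far more parameters.
    facilities = [
        {"name": "plant_a", "capacity": 500, "unit_cost": 4.10},
        {"name": "plant_b", "capacity": 300, "unit_cost": 3.75},
        {"name": "plant_c", "capacity": 400, "unit_cost": 4.60},
    ]


    def allocate(demand: int, plants: list[dict]) -> dict:
        plan = {}
        for plant in sorted(plants, key=lambda p: p["unit_cost"]):   # cheapest first
            if demand <= 0:
                break
            lot = min(demand, plant["capacity"])
            plan[plant["name"]] = lot
            demand -= lot
        if demand > 0:
            plan["unmet"] = demand        # signal back to the macro level
        return plan


    print(allocate(900, facilities))      # {'plant_b': 300, 'plant_a': 500, 'plant_c': 100}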

Striking the Balance

While I will dive into this a little further towards the tail end of this article, the overall premise from an operating standpoint is to have a model that optimizes the coordination of activity between individual operating units (facilities) that are running as autonomously as possible at peak efficiency, while distributing work across them in a way that optimizes production, availability, cost, or whatever other business parameters are most critical.

The key point being that distributing and enabling production across and within facilities should ideally be a matter of business parameters that can be input and adjusted at the macro level, with the entire system of facilities adjusting in real-time in a seamless, integrated way.  Conversely, the system should be a closed loop where a disruption at the facility level can inform a change across the overall ecosystem such that workloads are redistributed (if possible) to minimize the impact on overall production.  This could manifest in anything from one or more micro-level events (e.g., a higher than expected occurrence of unplanned outages) that inform production scheduling and the distribution of orders, to a major event (e.g., a fire or substantial facility outage) that redirects work to other facilities to minimize end-customer impact.  Arguably there are elements within ERP systems that can account for some of this today, but the level and degree of customization required to make it a robust and inclusive process would be substantial, given much of the data required to inform the model exists outside the ERP ecosystem itself, in the equipment, devices, processes, and execution within individual facilities.

Thinking about Mergers, Acquisitions, and Divestitures

As I mentioned in the previous section on data, establishing standards is critical to enabling a distributed paradigm for operations, a benefit of which is the speed at which an acquisition could be leveraged effectively in concert with an existing set of facilities.  This assumes there is an ability to translate and integrate systems rapidly to make the new facility function as a logical extension of what is already in place, but ultimately a number of those technology-related challenges would have to be worked through in the interest of optimizing individual facility performance regardless.  The alternative to having this macro-level, dynamic ecosystem functioning would likely be excess cost, inefficiency, and wasted production capacity.

 

Advancing the Digital Facility

The Role of the Digital Facility

At a time when data and analytics can inform meaningful action in real-time, the starting point for optimizing performance is the individual “processor”, which is the digital facility.  While the historical mental model would focus on IT and OT systems and integrating them in a secure way, the emergence of digital equipment, sensors, devices, and connected workers has led to more complex infrastructure and an exponentially growing amount of available data that needs to be thoughtfully integrated to maximize the value it can contribute over time.  With this increased reliance on technology, some of which likely runs locally and some in the cloud, the reliability of wired and wireless connectivity has also become a critical imperative of operating and competing as a digital manufacturer.

Thinking About Auto Maintenance

Drawing on a consumer example, I brought my car in for maintenance recently.  The first thing the dealer did was plug in and download a set of diagnostic information gathered over the course of my road trips over the last year and a half.  The data was collected passively, gave the technicians input on how various engine components were performing, and offered some insight on settings I could adjust, given my driving habits, that would enable the car to perform better (e.g., be more fuel efficient).  These diagnostic and safety systems are part of having a modern car and we take them for granted.

Turning back to a manufacturing facility, a similar mental model should apply for managing data at a local and enterprise level, which is that there should be a passive flow of data to a central repository that is mapped to processes, equipment, and operators in a way that enables ongoing analytics to help troubleshoot problems, identify optimization and maintenance opportunities, and look across facilities for efficiencies that could be leveraged at broader scale.
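
A minimal sketch of that passive flow (hypothetical names, with an in-memory list standing in for what would really be streaming and time-series infrastructure) might look like this, with every reading mapped to facility, equipment, and process so simple cross-facility comparisons become possible:

    # Sketch of a central repository for passively collected facility telemetry,
    # mapped to facility/equipment/process so it can be compared across sites.
    from collections import defaultdict
    from statistics import mean

    central_store = []          # stand-in for a time-series or lakehouse platform


    def ingest(facility: str, equipment: str, process: str, metric: str, value: float):
        central_store.append(
            {"facility": facility, "equipment": equipment, "process": process,
             "metric": metric, "value": value}
        )


    def average_by_facility(metric: str) -> dict:
        grouped = defaultdict(list)
        for row in central_store:
            if row["metric"] == metric:
                grouped[row["facility"]].append(row["value"])
        return {facility: mean(values) for facility, values in grouped.items()}


    ingest("plant_a", "press-01", "stamping", "cycle_time_s", 11.8)
    ingest("plant_b", "press-04", "stamping", "cycle_time_s", 10.2)
    print(average_by_facility("cycle_time_s"))   # spot the facility worth learning from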

Building Smarter Equipment

Taking things a step further… what if I were to attach a sensor under the hood of my car, take the data, build a model, and try to make driving decisions using that model and my existing dashboard as input?  The concept seems a little ridiculous given the systems already in place within a car to help make the driving experience safe and efficient.  That being said, in a manufacturing facility with legacy equipment, that intelligence isn’t always built in, and the role of analytics can become an informed guessing game of how a piece of equipment is functioning without the benefit of the knowledge of the people who built the equipment to begin with. 

Ultimately, the goal should be for the intelligence to be embedded within the equipment itself, to enable a level of self-healing or alerting, and then within control systems to look at operating conditions across a connected ecosystem to determine appropriate interventions as they occur, whether that be a minor adjustment to operating parameters or a level of preventative maintenance.

The Role of Edge Computing and Facility Data

The desire to optimize performance and safety at the individual facility level means that decisions need to be informed and actions taken in near real-time as much as possible.  This premise then suggests that facility data management and edge computing will continue to increase in criticality as more advanced uses of AI become part of everyday integrated work processes and facility operations.

 

Enabling Operators with Intelligence

The Knowledge Challenge

With the general labor shortage in the market and the retirement of experienced, skilled laborers, managing knowledge and accelerating productivity is a major issue to be addressed in manufacturing facilities.  There are a number of challenges associated with this situation, not the least of which can be safety related depending on the nature of the manufacturing environment itself.  Beyond that, the longer it takes to make an operator productive relative to their average tenure (which statistics suggest is continually shrinking), the more the effectiveness of the average worker becomes a limiting factor in the operating performance of a facility overall.

Understanding Operator Overload

One way that things have gotten worse is the proliferation of systems that comes with “modernizing” the manufacturing environment itself.  Confronted with an ever-expanding set of control, IT, ERP, and analytical systems, all of which can be sending alerts and requesting action (at varying degrees of criticality) on a relatively continuous basis, individual operators and supervisors in a facility are under substantially more pressure, compounded by the exponential amounts of data now available.  This is further complicated in situations where an individual “wears multiple hats”, fulfilling multiple roles/personas within a given facility, which makes arbitrating which actions to take against that increased number of demands considerably more complex.

Why Digital Experience Matters

While the number of applications that are part of an operating environment may not be easy to reduce or simplify without significant investment (and time to make change happen), it is possible to look at things like digital experience platforms (DXPs) as a means to bring multiple applications together into a single, integrated experience, inclusive of AR/VR/XR technologies as appropriate.  Organizing around an individual operator’s responsibilities can help reduce confusion, eliminate duplicated data entry, improve data quality, and ultimately improve productivity, safety, and effectiveness by extension.
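
One small illustration of organizing around an operator’s responsibilities is routing and prioritizing alerts by role rather than by source system; the roles, severities, and routing rules below are assumptions made for the sake of the sketch, not a description of any particular DXP.

    # Sketch of role-based alert routing: alerts from many systems are filtered and
    # ranked per persona, so an operator sees one prioritized queue, not ten inboxes.
    ROLE_SUBSCRIPTIONS = {
        "line_operator": {"control_system", "quality"},
        "maintenance_tech": {"control_system", "cmms"},
        "supervisor": {"control_system", "quality", "cmms", "erp"},
    }

    SEVERITY_RANK = {"safety": 0, "critical": 1, "warning": 2, "info": 3}


    def queue_for(role: str, alerts: list[dict]) -> list[dict]:
        relevant = [a for a in alerts if a["source"] in ROLE_SUBSCRIPTIONS[role]]
        return sorted(relevant, key=lambda a: SEVERITY_RANK[a["severity"]])


    alerts = [
        {"source": "erp", "severity": "info", "text": "PO 8812 approved"},
        {"source": "control_system", "severity": "safety", "text": "Guard door open on line 3"},
        {"source": "quality", "severity": "warning", "text": "SPC drift on station 12"},
    ]

    print(queue_for("line_operator", alerts))   # safety first, ERP noise filtered out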

The Role of the Intelligent Agent

With a foundation in place to organize and present relevant information and actions to an operator on a real-time basis, the next level of opportunity comes with the integration of intelligent agents (AI-enabled tools) into a digital worker platform to inform meaningful, guided actions that will ultimately create the most production and safety impact on an ongoing basis.  Again, there is a significant dependency on edge computing, wireless infrastructure, facility data, mobile devices, a delivery mechanism (the DXP mentioned above), and a sound underlying technology strategy to enable this at scale, but this is where AI tools can have a major impact in manufacturing moving forward.

 

Optimizing Performance through Orchestration

Why Orchestration Matters

Orchestration itself isn’t a new concept in manufacturing from my perspective, as legacy versions of it are likely inherent in control and MES systems themselves.  The challenge comes when you want to scale that concept out to include digital equipment, digital workers, digital devices, control systems, and connected applications in one seamless, integrated end-to-end process.  Orchestration provides the means to establish configurable and dynamic workflow, and the associated rules, for how you operate and optimize performance within and across facilities in a digital enterprise.

While this is definitely a capability that would need to be developed and extended over time, the concept is to think of the manufacturing ecosystem as a seamless collaboration of operators and equipment that ultimately drives efficient and safe production of finished goods.  Once the infrastructure to coordinate and track activity is established, process performance can be automatically recorded and analyzed to inform continuous improvement on an ongoing basis.

Orchestrating within the Facility

The uses of orchestration within a facility can range from something as simple as coordinating and optimizing material movement between autonomous vehicles and forklifts to computer vision applications for safety and quality management.  With the increasing number of connected solutions within a facility, having the means to integrate and coordinate activity between and across them offers a significant opportunity in digital manufacturing moving forward.

Orchestrating across the Enterprise

Scaling back out to the enterprise level and looking across facilities, there are opportunities in areas like procurement of MRO supplies and inventory optimization, production planning across similar facilities, and benchmarking and analyzing process performance to find improvements that can be applied broadly, creating substantially greater impact than is possible if the focus is limited to any individual facility alone.  Given that certain enterprise systems like ERPs tend to operate at a largely global versus local level, having infrastructure in place to coordinate activity across both can create visibility into improvement opportunities, and thereby substantial value, over time.

Coordinated Execution

Finally, to coordinate between the local and global levels of execution, a thoughtful approach to managing data and the associated analytics needs to be taken.  As was mentioned in the opening, the overall operating model is meant to leverage a configurable, distributed paradigm, so it is important to calibrate which data is shared and analyzed within and across layers as part of the evolving operating and technology strategy.

 

Wrapping Up

There is a considerable amount of complexity associated with moving from a legacy, process- and equipment-oriented mindset to one that is digitally enabled and based on insight-driven, orchestrated action.  That being said, the good news is that the value that can be unlocked with a thoughtful digital strategy is substantial, given we’re still on the front end of this evolution overall.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 02/12/2024