Developing Application Strategy – Managing the Intangibles

It ought to be easier (and cheaper) to run a business than this…

Complexity and higher-than-desirable operating costs are prevalent in most medium- to large-scale organizations.  With that generally comes interest in exploring ways to reduce and simplify the technology footprint: to reduce ongoing expenses, mitigate risk and limit security exposure, and free up capital either to reinvest in more differentiated, value-added activity or to contribute directly to the bottom line.

The challenge is really in finding an approach to simplification that is analytically sound while providing insight at speed, so you can get to the work rather than spend more time “analyzing” than is required at any step of the process.

In starting to outline the content for this article, aside from identifying the steps in a rationalization process and working through a practical example to illustrate some scenarios that can occur, I also started noting some other, more intangible aspects to the work that have come up in my experience.  When that list reached ten different dimensions, I realized that I needed to split what was intended to be a single article on this topic into two parts: one that addresses the process aspects of simplification and one that addresses the more intangible and organizational/change management-oriented dimensions.  This piece is focused on the intangibles, because the environment in which you operate is critical to setting the stage for the work and ultimate results you achieve.

The Remainder of this Article…

The dimensions that came to mind fell into three broader categories, into which they are organized below:

  • Leading and Managing Change
  • Guiding the Process and Setting Goals
  • Planning and Governance

For each dimension, I’ll try to provide some perspective on why it matters and some potential ideas to consider in the interest of addressing them in the context of a simplification effort overall.

Leading and Managing Change

At its core, simplification is a change management and transformational activity and needs to be approached as such.  It is as much about managing the intangibles and maintaining healthy relationships as anything to do with the process you follow or opportunities you surface.  Certainly, the structural aspects and the methodology matter, but without giving attention to the items below, you will likely either have some very rough sledding in execution, suboptimize your outcomes, or fail altogether.  Said differently: the steps you follow are only part of the work; improving your operating environment is critically important.

Leadership and Culture Matter

Like anything else that corresponds to establishing excellence in technology, courageous leadership and an enabling culture are fundamental to a simplification activity.  The entire premise associated with this work rests in change and, wherever change is required, there will be friction and resistance, and potentially significant resistance at that.

Some things to consider:

  • Putting the purpose and objective of the change front and center and reinforcing it often (likely reducing operating expense in the interest of improving profitability or freeing up capital for discretionary spending)
  • Working with a win-win mindset, looking for mutual advantage, building partnerships, listening with empathy, and seeking to enroll as many people in the cause as possible over time
  • Being laser-focused on impact, not solely on “delivery”, as the outcomes of the effort matter
  • Remaining resilient, humble (to the extent that there will be learnings along the way), and adaptable, working with key stakeholders to find the right balance between speed and value

It’s Not About the Process, It’s About Your Relationships

Much like portfolio management, it is easy to become overly focused on the process and data with simplification work and lose sight of the criticality of maintaining a healthy business/technology partnership.  If IT has historically operated in an order-taker mode, suggesting potentially significant changes to core business applications that involve training large numbers of end users (and the associated productivity losses and operating disruptions that come with that) may go nowhere, regardless of how analytically sound your process is.

Some things to consider:

  • Know and engage your customer. Different teams have different needs, strategies, priorities, risk tolerance, and so on
  • You can gather data and analyze your environment (to a degree) independent of your business partners, but they need to be equally invested in the vision and plan for it to be successful
  • Establishing a cadence with key stakeholders, individually and collectively, aligned to the pace of the work is important, minimally to maintain a healthy, transparent, and open dialogue on objectives, opportunities, risks, and required interventions and support

Be a Historian as Much as You are an Auditor

Back to the point above on improving the operating environment being as important as your process/methodology, it is important to recognize something up front in the simplification process: you need to understand how you got where you are as part of the exercise, or you may end up right back there as you try to make things “better”.  It could be that complexity is the result of a sequence of acquisitions, a set of decentralized decisions without effective oversight or governance, functional or capability gaps in enterprise solutions being addressed at a “local” level, underlying culture or delivery issues, etc.  Knowing the root causes matters.

As an example, I once saw a situation where two teams implemented different versions of the same application (in different configurations) purely because the technology leaders didn’t want to work with each other.  The same application could’ve supported both organizations, but the decisions were made without enterprise-level governance, the operating complexity and TCO increased, and the subsequent cost to consolidate into a single instance was deemed “lower priority” than continuing work.  While this is a very specific example, the point is that understanding how complexity is created can be very important in pivoting to a more streamlined environment.

Some things to consider:

  • As part of the inventory activity, look beyond pure data collection to having an opportunity to understand how the various portfolios of applications came about over time, the decisions that led to the complexity that exists, the pain points, and what is viewed as working well (and why)
  • Use the insights obtained to establish a set of criteria to consider in the formation of the vision and roadmap for the future so you have a sense whether the changes you’re making will be sustainable. These considerations could also help identify risks that could surface during implementation that could reintroduce the kind of complexity in place today

What Defines “Success”

Normally, a simplification strategy is based on a snapshot at a point in time, with an associated reduction in overall cost (or shift in overall spend distribution) and/or assets (applications, data solutions, etc.).  This is generally a good way to establish the case for change and the desired outcome of the activity itself, but it doesn’t necessarily cover what is “different” about the future state beyond a couple of core metrics.  I would argue it is also important to consider the previous point: how the organization developed a complex footprint to begin with.

As an example, if complexity was caused by a rapid series of acquisitions, even if I do a good job of reducing or simplifying the footprint in place, if I continue to acquire new assets, I will end up right back where I was, with a higher operating cost than I’d like.  In this case, part of your objective could be to have a more effective process for integrating acquisitions.

Some things to consider:

  • Beyond the financial and operating targets, identify any necessary process or organizational changes needed to facilitate sustainability of the environment overall
  • This could involve something as simple as reviewing enterprise-level governance processes, or more structural changes in how the underlying technology footprint is managed

Guiding the Process and Setting Goals

A Small Amount of Good Data is Considerably Better than a Lot of Bad

As with any business situation, it’s tempting to assume that having more data is automatically a good thing.  In the case of maintaining an asset inventory, the larger and more diverse an organization is, the more difficult it is to maintain the data with any accuracy.  To that end, I’m a very strong believer in maintaining as little information as possible, doing deep dives into detail only as required to support design-level work.

As an example, we could start the process by identifying functional redundancies (at a category/component level) and spend allocations within and across portfolios as a means to surface the overall savings opportunity and target areas for further analysis.  That requires a critical, minimum set of data, at a reasonable level of administrative overhead.  Once specific target areas are identified and prioritized, further data gathering to compare different solutions, perform gap analyses, and identify candidate future state solutions can be done as a separate process.  This approach prioritizes going broad (to Define opportunities) over going deep (to Design the solution), and I would argue it is a much more effective and efficient way to go about simplification, especially if the underlying footprint has any level of volatility, where more detailed information will become outdated relatively quickly.

Some things to consider:

  • Prioritize a critical, minimum set of data (primary functions served by an application, associated TCO, level of criticality, businesses/operating units supported, etc.) to understand spend allocation in relation to the operating and technology footprint
  • Deep dive into more particulars (functional differences across similar systems within a given category) as part of a specific design activity downstream of opportunity identification
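As a rough illustration of the “critical, minimum set of data” idea above, the inventory can be modeled with just a handful of fields and scanned for functional redundancy.  The field names, categories, and applications below are hypothetical, and this is only a sketch of the broad-pass analysis, not a prescribed tool:

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class App:
    name: str               # application name
    category: str           # primary function served (e.g., "General Ledger")
    tco: int                # annual total cost of ownership
    criticality: str        # e.g., "core" or "supporting"
    units: list = field(default_factory=list)  # operating units supported

def redundancy_report(apps):
    """Group the inventory by functional category and flag categories
    served by more than one application, ordered by combined spend so
    the largest consolidation opportunities surface first."""
    by_cat = defaultdict(list)
    for a in apps:
        by_cat[a.category].append(a)
    report = [
        (cat, len(group), sum(a.tco for a in group))
        for cat, group in by_cat.items()
        if len(group) > 1  # functional redundancy candidate
    ]
    return sorted(report, key=lambda r: r[2], reverse=True)

# Hypothetical inventory entries
inventory = [
    App("LedgerPro", "General Ledger", 1_200_000, "core", ["NA", "EU"]),
    App("BooksPlus", "General Ledger", 800_000, "core", ["APAC"]),
    App("TaxMate", "Tax", 150_000, "supporting", ["NA"]),
]
print(redundancy_report(inventory))
# [('General Ledger', 2, 2000000)]
```

Note that nothing here requires feature-level detail; that deeper comparison happens only in the downstream design activity for the categories this pass surfaces.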

Be Greedy, But Realistic

The simplification process is generally going to be iterative in nature, insofar as there may be a conceptual target for complexity and spend reduction/reallocation at the outset, some analysis is performed, the data provides insight on what is possible, the targets are adjusted, further analysis or implementation is performed, the picture is further refined, and so on.

In general, my experience is that there are always going to be issues in what you can practically pursue, and it is therefore a good idea to overshoot your targets.  By this, I mean we should strive to identify more than our original savings goals.  If we limit the opportunities we identify to a preconceived goal or target, we may suboptimize the business outcome if things go well, or fall short of expectations if we are able to pursue only a subset of what was originally identified due to various business, technology, or implementation-related issues.

Some things to consider:

  • Review opportunities by asking what would be different if you could only pursue smaller, incremental efforts; had a target twice what you’ve identified; or could start from scratch and completely redefine your footprint with an “optimal case” in mind… then consider what, if anything, would change about your scope and approach
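The overshoot logic above can be made concrete with a little arithmetic.  If only a fraction of identified opportunities survives business, technology, and implementation constraints, the identified pipeline must exceed the target by the inverse of that fraction.  The 60% realization rate below is purely an illustrative assumption:

```python
def required_pipeline(savings_target: float, realization_rate: float) -> float:
    """Size the opportunity pipeline needed to hit a savings target,
    given the fraction of identified opportunities expected to be
    realized in practice."""
    if not 0 < realization_rate <= 1:
        raise ValueError("realization_rate must be in (0, 1]")
    return savings_target / realization_rate

# To bank $10M when, say, ~60% of identified opportunities are
# ultimately realized, identify roughly $16.7M in candidate savings.
print(round(required_pipeline(10.0, 0.6), 1))  # 16.7
```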

Planning and Governance

Approach Matters

Part of the challenge with simplification is knowing where to begin.  Do you cover all of the footprint, the fringe (lower-priority assets), or the higher-cost/core systems?  The larger an organization is, the more important it is to target the right opportunities quickly in your approach and not try to boil the ocean.  That generally doesn’t work.

I would argue that the primary question to understand in terms of targeting a starting point is where you are overall from a business standpoint.  The first iteration of any new process tends to generate learnings and improvements, so there will be more disruption than expected the first time you execute the process end-to-end.  To that point, if there is a significant amount of business risk to making widespread, foundational changes, it may make sense to start on lower risk, clean up type activities on non-core/supporting applications (e.g., Treasury, Tax, EH&S, etc.) by comparison with core solutions (like an ERP, MES, Underwriting, Policy Admin, etc.).  On the other hand, if simplification is meant to help streamline core processes, enable speed-to-market and competitive advantage, or some form of business growth, it could be that focusing on core platforms first is the right approach to take.

The point is that the approach should not be developed independent of the overall business environment and strategy; they need to align with each other.

Some things to consider:

  • As part of the application portfolio analysis, understand the business criticality of each application, level of planned changes and enhancements, how those enable upcoming strategic business goals, etc.
  • Consider how the roadmap will enable business outcomes over time, whether that is ideally a slow build of incremental gains or a series of big-bet, high-impact changes that materially affect business value and IT spend
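To make the sequencing question above tangible, the trade-off between business risk and savings can be sketched as a simple ordering rule.  The candidate names, criticality scale, and savings figures below are hypothetical, and a real prioritization would weigh many more factors:

```python
def sequence(candidates, risk_tolerance="low"):
    """Order simplification candidates for the roadmap.

    With low risk tolerance, start with low-criticality "clean up"
    work (Treasury, Tax, EH&S-style supporting systems), using savings
    only as a tiebreaker; with higher tolerance, go after the core,
    high-savings platforms first.
    """
    if risk_tolerance == "low":
        key = lambda c: (c["criticality"], -c["savings"])
    else:
        key = lambda c: -c["savings"]
    return sorted(candidates, key=key)

# Hypothetical candidates: criticality 1 (supporting) to 3 (core),
# savings in $M per year.
candidates = [
    {"name": "ERP consolidation", "criticality": 3, "savings": 5.0},
    {"name": "Tax tool retirement", "criticality": 1, "savings": 0.4},
    {"name": "EH&S merge", "criticality": 1, "savings": 0.6},
]
print([c["name"] for c in sequence(candidates)])
# ['EH&S merge', 'Tax tool retirement', 'ERP consolidation']
```

The same inputs with `risk_tolerance="high"` would put the ERP consolidation first, which is the essence of aligning the approach to the business environment rather than to the data alone.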

 

Accuracy is More Important Than Precision

This point may seem to contradict what I wrote earlier in terms of having a smaller amount of good data, but the point here is that it’s important to acknowledge in a transformation effort that there is a directly proportional relationship between the degree of change involved in the effort and the associated level of uncertainty in the eventual outcome.  Said differently: the more you change, the less you can predict the result with any precision.

This is true because there is generally a limited amount of data available on the operating impact of changes to people, process, and technology.  Consequently, the more you change one or more of those elements, the more limited your ability to predict the exact outcome from a metrics standpoint (beyond an anecdotal/conceptual level) will be.  In line with the concepts I shared in the recent “Intelligent Enterprise 2.0” series, I believe that with orchestration and AI we can gather, analyze, and leverage a greater base of this kind of data, but the infrastructure to do so largely doesn’t exist in most organizations I’ve seen today.

Some things to consider:

  • Be mindful not to “overanalyze” the impact of process changes up front in the simplification effort. The business case will generally be based on the overall reduction in assets/ complexity, changes in TCO, and shifts (or reductions) in staffing levels from the current state
  • It is very difficult to predict the end state when a large number of applications are transitioned as part of a simplification program, so allow for a degree of contingency in the planning process (in schedule and finances) rather than spending time trying to model every outcome precisely. Some things that do not appear critical will reveal themselves to be so only in implementation, some applications that you believe you can decommission will remain for a host of reasons, and so on.  The best-laid plans on paper rarely prove out exactly in the course of execution, depending on the complexity of the operating environment and culture in place

Expect Resistance and Expect a Mess

Any large program in my experience tends to go through an “optimism” phase, where you identify a vision and fairly significant, transformative goal, the business case and plan looks good, it’s been vetted and stakeholders are aligned, and you have all the normal “launch” related events that generate enthusiasm and momentum towards the future… and then reality sets in, and the optimism phase ends.

Having written more than once on Transformation, the reality is that it is messy and challenging for a multitude of reasons, starting with the patience, adaptability, and tenacity it takes to really facilitate change at a systemic level.  The status quo feels safe and comforting; it is known, and upsetting that reality will necessarily lead to friction, resistance, and obstacles throughout the process.

Some things to consider:

  • Set realistic goals for the program at the outset, acknowledge that it is a journey, that sustainable change takes time, the approach will evolve as you deliver and learn, and that engagement, communication, and commitment are the non-negotiables you need throughout to help inform the right decisions at the right time to promote success
  • Plan with the 30-, 60-, and 90-day goals in mind, but acknowledge that any roadmap beyond the immediate execution window will be informed by delivery and likely evolve over time. I’ve seen quite a lot of time wasted on detailed planning more than one year out where a goal-based plan with conceptual milestones would’ve provided equal value from a planning and CBA standpoint

Govern Efficiently and Adjust Responsively

Given the scale and complexity of simplification efforts, it would be relatively easy to “over-report” on a program of this type and cause adverse impact on the work itself.  In line with previous articles that I’ve written on governance and transparency, my point of view is that the focus needs to be on enabling delivery and effective risk management, not administrative overhead.

Some things to consider:

  • Establish a cadence for governance early on to review ongoing delivery, identify interventions and support needed, learnings that can inform future planning, and adjust goals as needed
  • Large programs succeed or fail, in my experience, based on maintaining good transparency into where you are, identifying course corrections when needed, and making those adjustments quickly to minimize the cost of “the turns” when they inevitably happen. Momentum is so critical in transformation efforts that minimizing these impacts is essential to keeping things on track

Wrapping Up

Overall, the reason for separating the process from the change in addressing simplification was deliberate, because both aspects matter.  You can have a thoughtful, well-executed process and accomplish nothing in terms of change; equally, you can be very mindful of the environment and changes you want to bring about, but the execution model needs to be solid, or you will lose any momentum and goodwill you’ve built in support of your effort.

Ultimately, recognizing that you’re engaging in both a change and a delivery activity is the critical takeaway.  Most medium- to large-scale environments end up complex for a host of reasons.  You can change the footprint, but you need to change the environment as well, or it’s only a matter of time before you’ll find yourself right back where you started, perhaps with a different set of assets, but a lot of the same problems you had in the first place.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 10/22/2025

The Intelligent Enterprise 2.0 – IT Organizational Implications

“Go West, Young Man”

I remember a situation where an organization decided to aggressively offshore work to reduce costs.  The direction to IT leaders was no different than that of the Gold Rush above.  Leaders were given a mandate, a partner with whom to work, and a massive amount of contracting ensued.  The result?  A significant number of very small staff augmentation agreements (e.g., 1-3 FTEs), a reduction in fixed operating expenses but a disproportionate increase in variable ones, and a governance and administrative nightmare.  How did it happen?  Well, there was leadership support, a vision, and a desired benefit, but no commonly understood approach, plan, or governance.  The organization then spent a considerable amount of time transitioning all of the agreements in place to something more deliberate, establishing governance, and optimizing what quickly became a very expensive endeavor.

The requirements of transformation are no different today than they ever have been.  You need a vision, but also the conditions to promote success, and that includes an enabling culture, a clear approach, and governance to keep things on track and make adjustments where needed.

This is the final post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

 

Overall Approach

Writing about IT operating models can be complex for various reasons, mostly because there are many ways to structure IT work based on the size and complexity of an organization, and there is nothing wrong with that in principle.  A small- to medium-size IT organization, as an example, likely has little separation of concerns and hierarchy, people playing multiple roles, a manageable project portfolio, and minimal coordination required in delivery.  Standardization is not as complex and requires considerably less “design” than in a multi-national or large-scale organization, where standardization and reuse need to be weighed against the cost of coordination and management involved as the footprint scales.  There can also be other considerations, such as one I experienced where simplification of a global technology footprint was driven off of operating similarities across countries rather than geographic or regional considerations.

What tends to be common across ways of structuring and organizing are the various IT functions that exist (e.g., portfolio management, enterprise architecture, app development, data & analytics, infrastructure, IT operations), just at different scales and levels of complexity.  These can be capability-based, organized by business partner, with some capabilities centralized and some federated, etc., but the same essential functions are likely in place, at varying levels of maturity and scale based on the needs of the overall organization.  In multi-national or multi-entity organizations/conglomerates, these functions will likely be replicated across multiple IT organizations, with some or no shared capability existing at the parent/holding company or global level.

To that end, I am going to explore how I think about integrating the future state concepts described in the earlier articles of this series in terms of an approach and conceptual framework that, hopefully, can be applied to a broad range of IT organizations, regardless of their actual structure.

The critical challenge with moving from our current environment to one where AI, apps, and data are synthesized and integrated is doing it in a way that follows a consistent approach and architecture while not centralizing so much that we either constrain progress or limit the innovation that can be obtained by spreading the work across a broader percentage of an organization (business teams included).  This is consistent with the templatized approach discussed in the prior article on Transition, but there can be many ways that effort is planned and executed based on the scale and complexity of the organization undertaking the transformation itself.

 

Key Considerations

Define the Opportunities, Establish the Framework, Define the Standards

Before being concerned with how to organize and execute, we first need to have a mental model for how teams will engage with an AI-enabled technology environment in the future.  Minimally, I believe that will manifest itself in four ways:

  • Those who define and govern the overall framework
  • Those who leverage AI-enabled capabilities to do their work
  • Those who help create the future environment
  • Those who facilitate transition to the future state operating model

I will explore the individual roles in the next section, but an important first step is defining the level of standardization and reuse that is desirable from an enterprise architecture standpoint.  That question becomes considerably more complex in organizations with multiple entities, particularly when there are significant differences in the markets they serve, products/services they provide, etc.  That doesn’t mean, however, that reuse and optimization opportunities don’t exist, but rather that they need to be more thoughtfully defined and developed so as not to slow any individual organization down in their ability to innovate and promote competitive advantage.  There may be opportunities to look for common capabilities that make more sense to develop centrally and reuse that can actually accelerate speed-to-market if built in a thoughtful manner.

Regardless of whether capabilities in a larger organization are designed and developed in a highly distributed manner, having a common approach to the overall architecture and standards (as discussed in the Framework article in this series) could be a way to facilitate learning and optimization in the future (within and across entities), which will be covered in the third portion of this article.

 

Clarify the Roles and Expectations, Educate Everyone

The table below is not meant to be exhaustive, but rather to act as a starting point to consider how different individuals and teams will engage with the future enterprise technology environment and highlight the opportunity to clarify those various roles and expectations in the interest of promoting efficiency and excellence.

While I’m not going to elaborate on the individual data points in the table itself, a few points are worth noting in relation to it.

There is a key assumption that AI-related tools will be vetted, approved, and governed across an enterprise (in the above case, by the Enterprise Architecture function).  This is to promote consistency and effectiveness, manage risk, and consider issues related to privacy, IP-protection, compliance, security, and other critical concerns that otherwise would be difficult to manage without some formal oversight.

It is assumed that “low hanging fruit” tools like Copilot, LLMs, and other AI tools will be used to improve productivity and find efficiency gains while a broader, modern, and integrated future state technology footprint, with integrated agents, insights, and experts, is implemented.  The latter has touchpoints across an organization, which is why having a defined framework, standards, and governance is so important in creating the most cost-effective and rapid path to transforming the environment into one that creates disproportionate value and competitive advantage.

Finally, there are adjustments to be made in various operating aspects of running IT, which is to reinforce the idea that AI should not be a separate “thing” or a “silver bullet”, it needs to become an integrated part of the way an organization leverages and delivers technology capabilities.  To the extent it is treated as something different or special and separated from the ongoing portfolio of work and operational monitoring and management processes, it will eventually fail to integrate well, increase costs, and underdeliver on value contributions that are widely being chased after today.  Everyone across the organization should also be made aware of the above roles and expectations, along with how these new AI-related capabilities are being leveraged so they can help identify continuing opportunities to leverage and improve their adoption across the enterprise.

 

Govern, Coordinate, Collaborate, Optimize

With a model in place and organizational roles and responsibilities clarified, there needs to be a mechanism to collect learnings, facilitate improvements to the “templatized approach” referenced in the previous article in this series, and drive continuous improvement in how the organization functions and leverages these new capabilities.

This can manifest in several ways when spread across a medium to large organization, namely:

  • Teams can work in partnership or in parallel to try a new process or technology and develop learnings together
  • One team can take the lead to attempt a new approach or innovation and share learnings with others in a fast-follower approach
  • Teams can try different approaches to the same type of solution (when there isn’t a clear best option), benchmark, and select the preferred approach based on the learning across efforts

The point is that, especially when there is scale, centralizing too much can hamper learning and innovation.  To achieve maximum efficiency, it is better to develop coordinated approaches that can be governed and leveraged than to have a “free for all” where the overall opportunity to innovate and capture efficiencies is compromised.

 

Summing Up

As I mentioned at the outset, the challenge in discussing IT implications from establishing a future enterprise environment with integrated AI is that there are so many possibilities for how companies can organize around it.  That being said, I do believe a framework for the future intelligent enterprise should be defined, roles across the organization should be clarified, and transition to that future state should be governed with an eye towards promoting value creation, efficiency, and learning.

This concludes the series on my point of view related to the future of enterprise technology with AI, applications, and data and analytics integrated into one, aligned strategy.  No doubt the concepts will evolve as we continue to learn and experiment with these capabilities, but I believe there is always room for strategy and a defined approach rather than an excess of ungoverned “experimentation”.  History has taught us that there are no silver bullets, and that is the case with AI as well.  We will obtain the maximum value from these new technologies when we are thoughtful and deliberate in how we integrate them with how we work and the assets and data we possess.  Treating them as something separate and distinct will only suboptimize the value we create over time and those who want to promote excellence will be well served to map out their strategy sooner rather than later.

I hope the ideas across this series were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 08/19/2025

The Intelligent Enterprise 2.0 – Managing Transition

A compelling vision will stand the test of time, but it will also evolve to meet the needs of the day.

There are inherent challenges with developing strategy for multiple reasons:

  • It is difficult to balance the long- and short-term goals of an organization
  • It generally takes significant investments to accomplish material outcomes
  • The larger the change, the more difficult it can be to align people to the goals
  • The time it takes to mobilize and execute can undermine the effort
  • The discipline needed to execute at scale requires a level of experience not always available
  • Seeking “perfection” can be counterproductive by comparison with aiming for “good enough”

The factors above, and others too numerous to list, should serve as reminders that excellence, speed, and agility require planning and discipline; they don’t happen by accident.  The focus of this article will be to break down a few aspects of transitioning to an integrated future state in the interest of achieving predictable outcomes while getting there quickly and in a cost-effective way, with quality.  That’s a fairly tall order, but if we don’t shoot for excellence, we are eventually choosing obsolescence.

This is the sixth post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

 

Key Considerations

Create the Template for Delivery

My assumption is that we care about three things in transitioning to a future, integrated environment:

  • Rapid, predictable delivery of capabilities over time (speed-to-market, competitive advantage)
  • Optimized costs (minimal waste, value disproportionate to expense)
  • Seamless integration of new capabilities as they emerge (ease of use and optimized value)

What these three things imply is the need for a consistent, repeatable approach and a standard architecture towards which we migrate capabilities over time.  Articles 2-5 of this series explore various dimensions of that conceptual architecture from an enterprise standpoint; the purpose here is to focus on approach.

As was discussed in the article on the overall framework, the key is to start the design of the future environment around the various consumers of technology and the capabilities being provided to them, now and in the future, and to prioritize the relative value of those capabilities over time.  That should be done on an internal and an external level, so that individual roadmaps can be developed by domain to eventually surface and provide those capabilities as part of a portfolio management process.  In the course of identifying those capabilities, some key questions will be whether they are available today, whether they involve some level of AI services, whether they can or should be provided through a third party or internally, and whether the data ultimately exists to enable them.  This is where a significant inflection point should occur: the delivery of those capabilities should follow a consistent approach and pattern, so it can be repeated, leveraged, and made more efficient over time.

For instance, if an internally developed, AI-enabled capability is needed, the way the data is exposed and processed, the AI service (or data product) published, and the result integrated and consumed by a package or custom application should be exactly the same from a design standpoint, regardless of what the specific capability is.  That isn’t to say that the work needs to be done by only one team, as will be explored in the final article on IT organizational implications, but rather that we ideally want to determine the best “known path” to delivery, execute it repeatedly, and evolve it as required over time.

Taking this approach should provide a consistent end-user experience as services are brought online, a streamlined delivery process (you are essentially mass-producing capabilities over time), and a cost-optimized environment, because you are eliminating the bloat and waste of operating multiple delivery silos that would eventually impede speed-to-market at scale and lead to technical debt.

From an architecture standpoint, without wanting to go too deep into the mechanics here, to the extent that the current state and future state enterprise architecture models are different, it would be worth evaluating things like virtualization of data as well as adapters/facades in the integration layer as a way to translate between the current and future models so there is logical consistency in the solution architecture even where the underlying physical implementations are varied.  Our goal in enterprise architecture should always be to facilitate rapid execution, but promote standards, simplification, and reduce complexity and technical debt wherever possible over time.
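As a loose sketch of the adapter/facade idea described above (all names here are invented for illustration, not part of any real model), consumers can code against the future-state canonical model while thin adapters translate records from varied current-state systems:

```python
# Hypothetical example: a canonical "future state" customer model, with an
# adapter translating a legacy CRM record into it so solution architecture
# stays logically consistent even where physical implementations vary.
from dataclasses import dataclass

@dataclass
class CanonicalCustomer:          # future-state enterprise model
    customer_id: str
    name: str
    region: str

class LegacyCrmAdapter:
    """Facade over a current-state CRM whose fields differ from the model."""
    def get_customer(self, raw: dict) -> CanonicalCustomer:
        # Translate legacy field names and shapes into the canonical model
        # so consumers see one logical structure regardless of the source.
        return CanonicalCustomer(
            customer_id=raw["CUST_NO"],
            name=f'{raw["FIRST_NM"]} {raw["LAST_NM"]}',
            region=raw.get("SALES_TERR", "UNKNOWN"),
        )

adapter = LegacyCrmAdapter()
cust = adapter.get_customer({"CUST_NO": "C-100", "FIRST_NM": "Ada",
                             "LAST_NM": "Lovelace", "SALES_TERR": "EMEA"})
print(cust)
```

The same pattern applies whether the translation happens in an integration layer, a virtualization tier, or application code; the point is that consumers depend only on the canonical model.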

 

Govern Standards and Quality

With a templatized delivery model and target architecture in place, the next key aspect to transition is to both govern the delivery of new capabilities to identify opportunities to develop and evolve standards, as well as to evolve the “template” itself, whether that involves adding automation to the delivery, building reusable frameworks or components that can be leveraged, or other assets that can help reduce friction and ease future efforts.

Once new capabilities are coming online, the other key aspect is to review them for quality and performance, to also look for ways to evolve the approach, adjust the architecture, and continue to refine the understanding of how these integrated capabilities can best be delivered and leveraged on an end-to-end basis.

Again, the overall premise of this entire series of articles is to chart a path towards an integrated enterprise environment for applications, data, and AI in the future.  To be repeatable, we need to be consistent in how we plan, execute, govern, and evolve in the delivery of capabilities over time.

Certainly, there will be learnings that come from delivery, especially early in the adoption and integration of these capabilities. The way that we establish and enable the governance and evolution stemming from what we learn is critical in making delivery more predictable and ensuring more useful capabilities and insights over time.

 

Evolve in a Thoughtful Manner

With our learnings firmly in hand, the mental model I believe is worth considering is a scaled Agile-type approach, with “increment-level” (e.g., monthly) retrospectives to review the learnings across multiple sprints/iterations (ideally across multiple products/domains) and to identify opportunities to adjust estimation metrics, refine design patterns, develop core/reusable services, and make any other changes that would improve efficiency, cost, quality, or predictability.

Whether these occur monthly at the outset and eventually space out to a quarterly process as the environment and standards are better defined could depend on a number of factors, but the point is not to make the process too reactive to any individual implementation, and instead to look across a set of deliveries for consistent issues, patterns, and opportunities that will have a disproportionate impact on the environment as a whole.

The other key aspect to remember in relation to evolution is that the capabilities of AI in particular are evolving very rapidly at this point, which is also a reason for thinking about the overall architecture, separation of concerns, and standards for how you integrate in a very deliberate way.  A new version of a tool or technology shouldn’t require us to rewrite a significant portion of our footprint; ideally it should be a matter of upgrading the previous version and having the new capabilities available everywhere that service is consumed, or of swapping out one technology for another, with everything on the other side of an API remaining nearly unchanged to the extent possible.  This is admittedly a fairly “perfect world” description of what would happen in practice, depending on the nature of the underlying change, but the point is that, without allowing for these changes up front in the design, the level of disruption they cause will likely be amplified and will slow overall progress.
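One minimal way to picture that separation of concerns (vendor names and logic below are purely illustrative stand-ins, not real products): consumers depend on a thin contract, and the underlying engine can be upgraded or swapped behind it without touching them.

```python
# Hypothetical sketch: isolate the AI tool behind a small interface so that
# swapping the underlying model touches one module, not every consumer.
from typing import Protocol

class SummaryService(Protocol):
    def summarize(self, text: str) -> str: ...

class VendorAModel:
    def summarize(self, text: str) -> str:
        return text[:40]              # stand-in for a real model call

class VendorBModel:
    def summarize(self, text: str) -> str:
        return text.split(".")[0]     # different engine, same contract

def build_digest(service: SummaryService, notes: list[str]) -> list[str]:
    # Consumers depend only on the contract; replacing VendorAModel with
    # VendorBModel requires no change on this side of the "API".
    return [service.summarize(n) for n in notes]

print(build_digest(VendorAModel(), ["Quarterly output rose. Costs fell."]))
```

The design choice is the same at enterprise scale: the contract (the API) is what stays stable, while implementations behind it are allowed to churn.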

 

Summing Up

Change is a constant in technology.  It’s a part of life given that things continue to evolve and improve, and that’s a good thing to the extent that it provides us with the ability to solve more complex problems, create more value, and respond more effectively to business needs as they evolve over time.  The challenge we face is in being disciplined enough to think through the approach so that we can become effective, fast, and repeatable.  Delivering individual projects isn’t the goal, it’s delivering capabilities rapidly and repeatably, at scale, with quality.  That’s an exercise in disciplined delivery.

Having now covered the technology and delivery-oriented aspects of the future state concept, the remaining article will focus on how to think about the organizational implications on IT.

 

Up Next: IT Organizational Implications

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 08/15/2025

The Intelligent Enterprise 2.0 – Deconstructing Data-Centricity

Does having more data automatically make us more productive or effective?

When I wrote the original article on “The Intelligent Enterprise”, I noted that the overall ecosystem for analytics needed to change.  In many environments, data is moved from applications to secondary solutions, such as data lakes, marts, or warehouses, and enriched and integrated with other data sets to produce analytical outputs or dashboards that provide transparency into operating performance.  Much of this is reactive, ‘after-the-fact’ analysis of things we would rather do right or optimize the first time, as events occur.  The extension of that thought process was to move those insights to the front of the process, integrate them with the work as it is performed, and create a set of “intelligent applications” that would drive efficiency and effectiveness to different levels than we’ve been able to accomplish before.  Does this eliminate the need for downstream analytics, dashboards, and reporting?  No, for many reasons, but the point is to think about how we can make the future data and analytics environment one that enables insight-driven, orchestrated action.

This is the fifth post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

 

Design Dimensions

In line with the blueprint above, articles 2-5 highlight key dimensions of the model in the interest of clarifying various aspects of the conceptual design.  I am not planning to delve into specific packages or technologies that can be used to implement these concepts as the best way to do something always evolves in technology, while design patterns tend to last.  The highlighted areas and associated numbers on the diagram correspond to the dimensions described below.

 

Starting with the Consumer (1)

“Just give me all the data” is a request that isn’t unusual in technology.  Whether that is a byproduct of the challenges associated with completing analytics projects, a lack of clear requirements, or something else, these situations raise an immediate issue in practice: what is the quality of the underlying data, and what are we actually trying to do with it?

It’s tempting to start an analytics effort from the data storage and to work our way up the stack to the eventual consumer.  Arguably, this is a central premise in being “data-centric.” While I agree with the importance of data governance and management (the next topic), it doesn’t mean everything is relevant or useful to an end consumer, and too much data very likely just creates management overhead, technical complexity, and information overload.

A thoughtful approach needs to start with identifying the end consumers of the data, their relative priority, and their information and insight needs, and then developing a strategy to deliver those capabilities over time.  In a perfect world, that should leverage a common approach and delivery infrastructure so that it can be provided in iterations and elaborated to include broader data sets and capabilities across domains over time.  The end state should pair an integrated data model with a consistent way of delivering data products and analytics services that can be consumed by intelligent applications, agents, and solutions supporting the end consumer.

As an interesting parallel, it is worth noting that OpenAI has looked to converge its reasoning and standard large language models from the GPT-4 generation into a single approach for the GPT-5 release, so that end customers don’t need to be concerned with having selected the “right” model for their inquiry.  It shouldn’t matter to the data consumer.  The engine should be smart enough to leverage the right capabilities based on the nature of the request, and that is what I am suggesting in this regard.

 

Data Management and Governance (2)

Without the right level of business ownership, the infrastructure for data and analytics doesn’t really matter, because the value to be obtained from optimizing the technology stack will be limited by the quality of the data itself.

Starting with master data, it is critical to identify and establish data governance and management for the critical, minimum amount of data in each domain (e.g., customer in sales, chart of accounts in finance), and the relationship between those entities in terms of an enterprise data model.

Governing data quality has a cost and requires time, depending on the level of tooling and infrastructure in place, and it is important to weigh the value of the expected outcomes in relation to the complexity of the operating environment overall (people, process, and technology combined).

 

From Content to Causation (3)

Finally, with the level of attention given to Generative AI and LLMs, it is important to note the value to be realized when we shift our focus from content to processes and transactions in the interest of understanding causation and influencing business outcomes.

In a manufacturing context, with the increasing interplay between digital equipment, digital sensors, robotics, applications, and digital workers, there is a significant opportunity to orchestrate, to gather and analyze increasing volumes of data, and ultimately to optimize production capacity, avoid unplanned events, and increase the safety and efficacy of workers on the shop floor.  This requires deliberate and intentional design, with outcomes in mind.

The good news is that technologies are advancing in their ability to analyze large data sets and derive models representing the characteristics and relationships across the various actors in play, and I believe we’ve only begun to scratch the surface of the potential for value creation in this regard.

 

Summing Up

Pulling back to the overall level, data is critical, but it’s not the endgame.  Designing the future enterprise technology environment is about exposing and delivering the services that enable insightful, orchestrated action on behalf of the consumers of that technology.  That environment will be a combination of applications, AI, and data and analytics, synthesized into one meaningful, seamless experience.  The question is how long it will take us to make that possible.  The sooner we begin the journey of designing that future state with agility, flexibility, and integration in mind, the better.

Having now elaborated the framework and each of the individual dimensions, the remaining two articles will focus on how to approach moving from the current to future state and how to think about the organizational implications on IT.

Up Next: Managing Transition

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 08/03/2025

The Intelligent Enterprise 2.0 – Evolving Applications

No one builds applications or uses new technology with the intention of making things worse… and yet we have and still do at times.

Why does this occur, time and again, with technology?  The latest thing was supposed to “disrupt” or “transform” everything.  I read something that suggested this was it.  The thing we needed to do, that a large percentage of companies were planning, that was going to represent $Y billions of spending two years from now, generating disproportionate efficiency, profitability, and so on.  Two years later (if that), there was something else being discussed, with a considerable number of “learnings” from the previous exercise, but the focus was no longer the same… whether that was Windows applications and client/server computing, the internet, enterprise middleware, CRM, Big Data, data lakes, SaaS, PaaS, microservices, mobile applications, converged infrastructure, public cloud… the list is quite long, and I’m not sure that productivity and the value/cost equation for technology investments are any better in many cases.

The belief that technology can have such a major impact and the degree of continual change involved have always made the work challenging, inspiring, and fun.  That being said, the tendency to rush into the next advance without forming a thoughtful strategy or being deliberate about execution can be perilous in what it often leaves behind, which is generally frustration for the end users/consumers of those solutions and more unrealized benefits and technical debt for an organization.  We have to do better with AI, bringing intelligence into the way we work, not treating it as something separate entirely.  That’s when we will realize the full potential of the capabilities these technologies provide.  In the case of an application portfolio, this is about our evolution to a suite of intelligent applications that fit into the connected ecosystem framework I described earlier in the series.

This is the fourth post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

 

Design Dimensions

In line with the blueprint above, articles 2-5 highlight key dimensions of the model in the interest of clarifying various aspects of the conceptual design.  I am not planning to delve into specific packages or technologies that can be used to implement these concepts as the best way to do something always evolves in technology, while design patterns tend to last.  The highlighted areas and associated numbers on the diagram correspond to the dimensions described below.

Before exploring various scenarios for how we will evolve the application landscape, it’s important to note a couple overall assumptions:

  • End user needs and understanding should come first, then the capabilities
  • Not every application needs to evolve. There should be a benefit to doing so
  • I believe the vast majority of product/platform providers will eventually provide AI capabilities
  • Providing application services doesn’t mean I have to have a “front-end” in the future
  • Governance is critical, especially to the extent that citizen development is encouraged
  • If we’re not mindful how many AI apps we deploy, we will cause confusion and productivity loss because of the fragmented experience

 

Purchased Software (1)

The diagram below highlights a few different scenarios for how I believe intelligence will find its way into applications.

In the case of purchased applications, between the market buzz and continuing desire for differentiation, it is extremely likely that a large share of purchased software products and platforms will have some level of “AI” included in the future, whether that is an AI/ML capability leveraging OLTP data that lives within its ecosystem, or something more causal and advanced in nature. 

I believe it is important to delineate between internally generated insights and ones coming as part of a package for several reasons. First, we may not always want to include proprietary data in purchased solutions, especially to the degree they are hosted in the public cloud and we don’t want to expose our internal data to that environment from a security, privacy, or compliance standpoint. Second, we may not want to expose the rules and IP associated with our decisioning and specific business processes to the solution provider. Third, to the degree we maintain these as separate things, we create flexibility to potentially migrate to a different platform more easily than if we are tightly woven into a specific package.  And, finally, the required data ingress to commingle a larger data set to expand the nature of what a package could provide “out of the box” may inflate operating costs of the platforms unnecessarily (this can definitely be the case with ERP platforms).

The overall assumption is that, rather than require custom enhancements of a base product, the goal from an architecture standpoint would be for the application to be able to consume and display information from an external AI service that is provided from your organization.  This is available today within multiple ERP platforms, as an example.

The graphic below shows two different migration paths towards a future state where applications have both package-provided and internally provided AI capabilities.  In the first, the package provider moves first, internal capabilities are developed in parallel as a sidecar application, and those capabilities are eventually fully integrated into the platform as a service.  In the second, the internal capability is developed first, run in parallel, and then folded into the platform solution.

Custom-Developed Software (2)

In terms of custom software, the challenge is, first, evaluating whether there is value in introducing additional capabilities for the end user and, second, understanding the implications for trying to integrate the capabilities into the application itself versus leaving them separate.

In the event that there is uncertainty about the end user value of having the capability, implementing the insights as a sidecar/standalone application first, then integrating them within the application as a second step, may be the best approach.

If a significant amount of redesign or modernization is required to directly integrate the capabilities, it may make sense either to evaluate market alternatives as a replacement for the internal application or to leave the insights separate entirely.  Similar to purchased products, the insights should be delivered as a service and integrated into the application, rather than built as an enhancement, to provide greater flexibility in how they are leveraged and to simplify migration to a different solution in the future.

The third scenario in the diagram above is meant to reflect a separate insights application that is then folded into the custom application as a service over time, so that it becomes a more seamless experience for the end user.

Either way, whether it be a purchased or custom-built solution, the important points are to decouple the insights from the applications to provide flexibility, and to offer both a front-end for users to interact with the applications and a service-based approach, so that an agent acting on behalf of the user, or the system itself, could orchestrate various capabilities exposed from that application without the need for user intervention.
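A minimal sketch of that decoupling, with all names and scoring logic invented for the example: the insight lives behind a service function, and both a front-end handler and an agent consume the same service rather than embedding the logic in either one.

```python
# Hypothetical insight exposed as a service, independent of any front-end.
def churn_insight_service(account: dict) -> dict:
    # Toy scoring rule, purely for illustration.
    score = 0.9 if account.get("late_payments", 0) > 2 else 0.2
    return {"account": account["id"], "churn_risk": score}

def ui_handler(account: dict) -> str:
    # Front-end path: render the same insight for a human user.
    result = churn_insight_service(account)
    return f'Account {result["account"]}: churn risk {result["churn_risk"]:.0%}'

def agent_step(accounts: list[dict]) -> list[str]:
    # Agent path: act on the same service with no user intervention.
    return [a["id"] for a in accounts
            if churn_insight_service(a)["churn_risk"] > 0.5]

print(ui_handler({"id": "A-1", "late_payments": 3}))
print(agent_step([{"id": "A-1", "late_payments": 3},
                  {"id": "A-2", "late_payments": 0}]))
```

Because the insight is a service, migrating the front-end, replacing the application, or adding a new agent consumer leaves the insight itself untouched.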

 

From Disconnected to Integrated Insights (3)

One of the reasons for separating out these various migration scenarios is to highlight the risk that introducing too many sidecar or special/single-purpose applications could cause significant complexity if not managed and governed carefully.  Insights should serve a process or need, and if the goal is to make a user more productive, effective, or safer, those capabilities should ultimately be used to create more intelligent applications that are easier to use.  To that end, there likely would be value in working through a full product lifecycle when introducing new capabilities, to determine whether each is meant to be preserved, integrated with a core application (as a service), or tested and possibly decommissioned once a more integrated capability is available.

 

Summing Up

While the experience of a consumer of technology likely will change and (hopefully) become more intuitive and convenient with the introduction of AI and agents, being thoughtful in how we develop an application architecture strategy, leveraging components and services, and putting the end user first will be priorities if we are going to obtain the value of these capabilities at an enterprise level.  Intelligent applications are where we are headed, and our ability to work with an integrated vision of the future will be critical to realizing the benefits available in that world.

The next article will focus on how we should think about the data and analytics environment in the future state.

Up Next: Deconstructing Data-Centricity

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 07/30/2025

The Intelligent Enterprise 2.0 – Integrating Artificial Intelligence

“If only I could find an article that focused on AI”… said no one, any time recently.

In a perfect world, I don’t want “AI” anything, I want to be able to be more efficient, effective, and competitive.  I want all of my capabilities to be seamlessly folded into the way people work so they become part of the fabric of the future environment.  That is why having an enterprise-level blueprint for the future is so critically important.  Things should fit together seamlessly and they often don’t, especially when we don’t design with integration in mind from the start.  That friction slows us down, costs us more, and makes us less productive than we should be.

This is the third post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

Design Dimensions

In line with the blueprint above, articles 2-5 highlight key dimensions of the model in the interest of clarifying various aspects of the conceptual design.  I am not planning to delve into specific packages or technologies that can be used to implement these concepts as the best way to do something always evolves in technology, while design patterns tend to last.  The highlighted areas and associated numbers on the diagram correspond to the dimensions described below.

Natural Language First (1)

I don’t own an Alexa device, but I have certainly had the experience of talking to someone who does, and heard them say “Alexa, do this…”, then repeat themselves, then repeat themselves again, adjusting their word choice slightly, or slowing down what they said, with increasing levels of frustration, until eventually the original thing happens.

These experiences of voice-to-text and natural language processing have been anything but frictionless: quite the opposite, in fact.  With the advent of large language models (LLMs), it’s likely that these kinds of interactions will become considerably easier and more accurate, and that written and spoken input will become a means to initiate one or more actions from an end user standpoint.

Is there a benefit?  Certainly.  Take the case of a medical care provider directing calls to a centralized number for post-operative and case management follow-ups.  A large volume of calls needs to be processed, and there are qualified medical personnel available to handle them on a prioritized basis.  The technology can play the role of a silent listener, recording key points of the conversation and recommended actions (saving time in documenting the calls) while making contextual observations integrated with the healthcare worker’s application (providing insights) to potentially help address any needs that arise mid-discussion.  The net impact could be a higher volume of calls processed, due to the reduction in time spent documenting calls, and improved quality of care from the additional insights provided to the healthcare professional.  Is this artificial intelligence replacing workers?  No, it is helping them be more productive and effective, by integrating into the work they are already doing, reducing the lower value-add activities and allowing them to focus more on patient care.

If natural language processing can be integrated such that comprehension is highly accurate, I can foresee where a large amount of end user input could be provided this way in the future.  That being said, the mechanics of a process and the associated experience still need to be evaluated so that it doesn’t become as cumbersome as some voice response mechanisms in place today can be, asking you to “say or enter” a response, then confirming what you said back to you, then asking for you to confirm that, only to repeat this kind of process multiple times.  No doubt, there is a spreadsheet somewhere to indicate savings for organizations in using this kind of technology by comparison with having someone answer a phone call.  The problem is that there is a very tedious and unpleasant customer experience on the other side of those savings, and that shouldn’t be the way we design our future environments.

Orchestration is King (2)

Where artificial intelligence becomes powerful is when it pivots from understanding to execution.

Submitting a natural language request, “I would like to…” or “Do the following on my behalf…”, having the underlying engine convert that request to a sequence of actions, and then ultimately executing those requests is where the power of orchestration comes in.

Back to my earlier article on The Future of IT from March of 2024, I believe we will pivot from organizations needing to create, own, and manage a large percentage of their technology footprint to largely becoming consumers of technologies produced by others, that they configure to enable their business rules and constraints and that they orchestrate to align with their business processes.

Orchestration will exist on four levels in the future:

  • That which is done on behalf of the end user to enable and support their work (e.g., review messages, notifications, and calendar to identify priorities for my workday)
  • That which is done within a given domain to coordinate transaction processing and optimize leverage of various components within a given ecosystem (e.g., new hire onboarding within an HR ecosystem or supplier onboarding within the procurement domain)
  • That which is done across domains to coordinate activity that spans multiple domains (e.g., optimizing production plans coming from an ERP system to align with MES and EAM systems in Manufacturing, given execution and maintenance needs)
  • Finally, that which is done within the data and analytics environment to minimize data movement and compute while leveraging the right services to generate a desired outcome (e.g., optimizing cost and minimizing the data footprint by comparison with more monolithic approaches)
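As a toy illustration of the domain-level case (capability names, the onboarding plan, and all logic below are invented for the example), orchestration can be pictured as mapping a request to an ordered plan of registered capability services:

```python
# Hypothetical registry of capability services within one domain ecosystem.
CAPABILITIES = {
    "create_worker_record": lambda ctx: {**ctx, "worker_id": "W-42"},
    "provision_laptop":     lambda ctx: {**ctx, "laptop": "ordered"},
    "grant_system_access":  lambda ctx: {**ctx, "access": "granted"},
}

PLANS = {
    # e.g., new hire onboarding within an HR ecosystem
    "onboard_new_hire": ["create_worker_record", "provision_laptop",
                         "grant_system_access"],
}

def orchestrate(request: str, context: dict) -> dict:
    # Execute each registered capability in order, threading context through,
    # so the orchestrator coordinates components it does not itself implement.
    for step in PLANS[request]:
        context = CAPABILITIES[step](context)
    return context

print(orchestrate("onboard_new_hire", {"name": "Ada"}))
```

The same shape scales up: end-user, domain, cross-domain, and data-layer orchestration differ mainly in which capabilities are registered and who is allowed to invoke the plan.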

Beyond the above, we will also see agents taking action on behalf of other, higher-level agents, in more of a hierarchical relationship where a process is decomposed into subtasks executed (ideally in parallel) to serve an overall need.
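The hierarchical pattern can be sketched in a few lines (task names and the decomposition rule are invented for the example): a coordinating agent splits a request into subtasks and fans them out to lower-level workers in parallel.

```python
# Hypothetical sketch of a higher-level agent delegating to sub-agents.
from concurrent.futures import ThreadPoolExecutor

def sub_agent(task: str) -> str:
    # Stand-in for a lower-level agent handling one subtask.
    return f"{task}:done"

def coordinator(request: str) -> list[str]:
    # Decompose the request, then execute subtasks in parallel and
    # gather results in order for the higher-level agent to assemble.
    subtasks = [f"{request}.step{i}" for i in range(3)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(sub_agent, subtasks))

print(coordinator("close-the-books"))
```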

Each of these approaches refers back to the concept of leveraging defined ecosystems and standard integration, as discussed in the previous article on the overarching framework.

What is critical is to think about this as a journey towards maturing and exposing organizational capabilities.  If we assume an end user wants to initiate a set of transactions through a verbal command, which is then turned into a process to be orchestrated on their behalf, we need to be able to expose the services required to ultimately enable that request, whether that involves applications, intelligence, data, or some combination of the three.  If we establish the underlying framework to enable this kind of orchestration, however it is initiated (through an application, an agent, or some other mechanism), we could theoretically plug new capabilities into that framework to expand our enterprise-level technology capabilities over time, creating exponential opportunity to make more of our technology investments.  The goal is to break down all the silos and make every capability we have accessible to be orchestrated on behalf of an end user or the organization.

I met with a business partner not that long ago who was a strong advocate for “liberating our data”.  My argument would be that the future of an intelligent enterprise should be to “liberate all of our capabilities”.

Insights, Agents, and Experts (3)

Having focused on orchestration, which is a key capability within agentic solutions, I did want to come back to three roles that I believe AI can fulfill in an enterprise ecosystem of the future; they are:

  • Insights – observations or recommendations meant to inform a user to make them more productive, effective, or safer
  • Agents – applications that orchestrate one or more activities on behalf of or in concert with an end user
  • Experts – applications that act as a reference for learning and development and serve as a representation of the “ideal” state, either within a given domain (e.g., a Procurement “Expert” may have accumulated knowledge of best practices, market data, and internal KPIs and goals that allow end users and applications to interact with it as an interactive knowledge base meant to help optimize performance) or across domains (i.e., extending the role of a domain-based expert to focus on enterprise-level objectives and to help calibrate the goals of individual domains to achieve those overall outcomes more effectively)
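
One practical use of this three-role taxonomy is classifying an AI portfolio consistently so governance and investment decisions can be made by role rather than by tool.  A minimal sketch follows; the capability names and domains are entirely hypothetical.

```python
# Hypothetical sketch: modeling the Insight/Agent/Expert roles so a portfolio
# inventory can classify AI capabilities consistently. All entries are made up.
from dataclasses import dataclass
from enum import Enum

class AIRole(Enum):
    INSIGHT = "informs a user"             # observations and recommendations
    AGENT = "acts for or with a user"      # orchestrates activities
    EXPERT = "interactive knowledge base"  # reference for an "ideal" state

@dataclass
class Capability:
    name: str
    role: AIRole
    domain: str

portfolio = [
    Capability("maintenance risk alerts", AIRole.INSIGHT, "Manufacturing"),
    Capability("supplier onboarding bot", AIRole.AGENT, "Procurement"),
    Capability("procurement advisor", AIRole.EXPERT, "Procurement"),
]

# Slice the portfolio by role, e.g., to review all agents for oversight purposes
agents = [c.name for c in portfolio if c.role is AIRole.AGENT]
```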

I’m not aware of “Expert”-type capabilities existing today for the most part, but I do believe having more of an autonomous entity that can provide support, guidance, and benchmarking to help optimize the performance of individuals and systems could be a compelling way to leverage AI in the future.

AI as a Service (4)

I will address how AI should be integrated into an application portfolio in the next article, but I felt it was important to clarify that, while AI is being discussed as an objective, a product, and an outcome in many cases today, it is important to think of it as a service that lives and is developed as part of a data and analytics capability.  This feels like the right logical association because the insights and capabilities associated with AI are largely data-centric and heavily model-dependent, and that should live separately from the applications meant to express those insights and capabilities to an end user.

In my experience, the complicating factor can arise in how the work is approached and in the capabilities of the leaders charged with AI implementation, something I will address in the seventh article in this series on organizational considerations.

Suffice it to say that I see AI as an application-oriented capability, even though it is heavily dependent on data and your underlying model.  To the extent that a number of data leaders come from a background focused on storage, optimization, and performance of traditional or even advanced analytics/data science capabilities, they may not be ideal candidates to establish the vision for AI, given it benefits from more of an outside-in (consumer-driven) mindset than an inside-out (data-focused) approach.

Summing Up

With all the attention being given to AI, the main purpose of breaking it down in the manner I have above is to try to think about how we integrate and leverage it within and across an enterprise, and most importantly: not to treat it as a silo or a one-off.  That is not the right way to approach AI moving forward.  It will absolutely become part of the way people work, but it is a capability like many others in technology, and it is critically important that we continue to start with the consumers of technology and how we are making them more productive, effective, safe, and so on.

The next two articles will focus on how we integrate AI into the application and data environments.

Up Next: Evolving Applications

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 07/28/2025

The Intelligent Enterprise 2.0 – A Framework for the Future

Why does it take so much time to do anything strategic?

This is not an uncommon question to hear in technology and, more often than not, the answer is relatively simple: because the focus in the organization is delivering projects and not establishing an environment to facilitate accelerated delivery at scale.  Those are two very different things and, unfortunately, it requires more thought, partnership, and collaboration between business and technology teams than headlines like “let’s implement product teams” or “let’s do an Agile transformation” imply.  Can those mechanisms be part of how you execute in a scaled environment?  Absolutely, but choosing an operating approach and methodology shouldn’t precede or take priority over having a blueprint to begin with, and that’s the focus of this article.

This is the second post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.

Design Dimensions

In line with the blueprint above, articles 2-5 highlight key dimensions of the model in the interest of clarifying various aspects of the conceptual design.  I am not planning to delve into specific packages or technologies that can be used to implement these concepts as the best way to do something always evolves in technology, while design patterns tend to last.  The highlighted areas and associated numbers on the diagram correspond to the dimensions described below.

User-Centered Design (1)

A firm conducts a research project on behalf of a fast-food chain.  There is always a “coffee slick” in the drive thru, where customers stop and pour out part of their beverage.  Is there something wrong with the coffee?  No.  Customers are worried about burning themselves.  The employees are constantly overfilling the drink, assuming that they are being generous, but actually creating a safety concern by accident.  Customers don’t want to spill their coffee, so that excess immediately goes to waste.

In the world of technology, the idea of putting enabling technology in the hands of “empowered” end users, or delivering that one more “game changing” tool or application, is tempting, but often doesn’t deliver the value originally expected.  This can occur for a multitude of reasons, but what is often the case is an inadequate understanding of the end user’s mental model and approach to performing their work, or a fundamental misconception that more technology in the hands of a user is always a good thing (it definitely isn’t).

Two learnings came out of the dotcom era.  The first was the value to be derived from investing in user-centered design: thinking through users’ needs and workflow, and designing experiences around them.  The second came from the common practice of assuming that a disruptive technology (the internet in this case) was cause to spin out a separate organization (the “eBusiness” teams of the time) meant to incubate and accelerate development of capabilities that embraced the new technology.  These teams generally lacked a broader understanding of the “traditional” business and its associated operating requirements, and thus began the “bricks versus clicks” issues and channel conflict that eventually led to these capabilities being folded back into the broader organization, but only after having spent time and (in many cases) a considerable amount of money experimenting without producing sustainable business value.

In the case of artificial intelligence, it’s tempting to want to stand up new organizations or repurpose existing ones to mobilize the technology, or to assume everything will eventually be relegated to a natural language-based interface where a user describes what they want to an agent acting as their personal virtual assistant, with the system deriving the appropriate workflow and orchestrating the necessary actions to support the request.  While that may be a part of our future reality, taking an approach similar to the dotcom era would be a mistake, and there will be lost opportunity where this is the chosen path.

To be effective in a future digital world with AI, we need to think of how we want things integrated at the outset, starting with that critical understanding of the end user and how they want to work.  Technology is meant to enable and support them, not the other way around, and leading with technology versus a need is never going to be an optimal approach… a lesson we’ve been shown many times over the years, no matter how disruptive the technology advancement has been.

I will address some of the organizational implications of the model in the seventh article in this series, so the remainder of this post will be on the technology framework itself.

Designing Around Connected Ecosystems (2)

Domain-driven design is not a new concept in technology.

As I mentioned in the first article, the technology footprint of most medium- to large-scale organizations is complex and normally steeped in redundancies, varied architectures, unclear boundaries between custom and purchased software, hosted and cloud-based environments, hard-coded integrations, and inconsistent ways of moving data within and across domains.

While package solutions offer some level of logical separation of concerns between different business capabilities, the natural tendency in the product and platform space is to move towards more vertically or horizontally integrated solutions that create customer lock-in and make interoperability very challenging, particularly in the ERP space.  What also tends to occur is that an organization’s data model is biased to conform to what works best for a package or platform, which is not necessarily the best representation of their business or their consumers of technology (something I will address in article 5 on Data & Analytics).

In terms of custom solutions, given they are “home grown”, unless they were well-architected at the outset, they very likely provide multiple capabilities without clear separation of concerns, in ways that make them difficult to integrate with other systems in “standard” ways.

While there is nothing unique about these kinds of challenges, the problem comes when new technology capabilities like AI become available and we want to either replace or integrate things in a different way.  This is where the lack of enterprise-level design and a broader, component-based architecture takes its toll, because significant remediation, refactoring, and modernization will likely be required to enable existing systems to interoperate with the new capabilities.  These things take time, add risk, and ultimately add cost to our ability to respond when these opportunities arise, and no one wants to put new plumbing in a house that is already built with a family living in it.

On the other hand, in an environment with defined, component-based ecosystems that uses standard integration patterns, replacing individual components becomes considerably easier and faster, with much less disruption at both a local- and an enterprise-level.  In a well-defined, component-based environment, I should be able to replace my Talent Acquisition application without having to impact my Performance Management, Learning & Development, or Compensation & Benefits solutions from an HR standpoint.  Similarly, I shouldn’t need to make changes to my Order Management application within my Sales ecosystem because I’m transitioning to a different CRM package.  To the extent that you are using standard business objects to support integration across systems, the need to update downstream systems in other domains should be minimized as well.  Said differently, if you want to be fast, be disciplined in your design.
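
The Talent Acquisition example above boils down to depending on a stable interface and standard business objects rather than on a specific vendor.  A minimal sketch, with entirely hypothetical vendor and interface names:

```python
# Hypothetical sketch: if each component implements a stable interface and
# exchanges standard business objects, swapping one vendor's Talent Acquisition
# system for another shouldn't touch the rest of the HR ecosystem.
from typing import Protocol

class TalentAcquisition(Protocol):
    def open_requisition(self, title: str) -> dict: ...

class VendorA:
    def open_requisition(self, title: str) -> dict:
        return {"title": title, "source": "VendorA"}  # standard "requisition" object

class VendorB:
    def open_requisition(self, title: str) -> dict:
        return {"title": title, "source": "VendorB"}

def hr_ecosystem(ta: TalentAcquisition) -> dict:
    # Downstream components depend only on the interface and the standard
    # business object, not on the vendor behind it.
    return ta.open_requisition("Data Engineer")

req = hr_ecosystem(VendorB())  # swap in VendorB() for VendorA() with no other changes
```

The discipline is in the interface and the shared object definitions; the vendor behind them becomes replaceable.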

Modular, Composable, Standardized (3)

Beyond designing towards a component-based environment, it is also important to think about capabilities independent of a given process so you have more agility in how you ultimately leverage and integrate different things over time. 

Using a simple example from personal lines insurance, I want to be able to support a third-party rating solution by exposing a “GetQuote” function that takes the necessary customer information and coverage-related parameters and sends back a price.  From a carrier standpoint, the process may involve ordering credit and pulling a DMV report (for accidents and violations) as inputs.  I don’t necessarily want these capabilities to be developed internal to the larger “GetQuote”, because I may want to leverage them for any one of a number of other reasons, so those smaller-grained (more atomic) transactions should also be defined as services that can be leveraged by the larger one.  While this is a fairly trivial case, there are often situations where delivery efforts move at such a rapid pace that things are tightly coupled or built together that really should be discrete and separate, providing more flexibility and leverage of those individual services over time.
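
As a sketch of that composition, GetQuote below calls the credit and DMV capabilities as separate services rather than embedding them, so each remains independently reusable.  The function names, rating logic, and figures are all made up for illustration; they are not real rating rules.

```python
# Hypothetical sketch: a coarse-grained GetQuote service composed from
# smaller, independently exposed (atomic) services. All figures are illustrative.

def order_credit(customer_id: str) -> int:
    return 720  # stand-in for a call to a credit bureau service

def pull_dmv_report(license_no: str) -> dict:
    return {"accidents": 0, "violations": 1}  # stand-in for a DMV service

def get_quote(customer_id: str, license_no: str, base_rate: float) -> float:
    """Compose the atomic services into a premium calculation."""
    score = order_credit(customer_id)
    dmv = pull_dmv_report(license_no)
    surcharge = 50.0 * (dmv["accidents"] + dmv["violations"])
    discount = 100.0 if score >= 700 else 0.0
    return base_rate + surcharge - discount

quote = get_quote("C123", "D456", base_rate=1000.0)
```

Because order_credit and pull_dmv_report stand alone, they can later serve underwriting, renewal, or fraud workflows without touching GetQuote.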

This also can occur in the data and analytics space, where there are normally many different tools and platforms between the storage and consumption layers and ideally you want to optimize data movement and computing resources such that only the relevant capabilities are included in a data pipeline based on specific customer needs.

The flexibility described above is predicated on a well-defined architecture that is service-based and composable, with standard integration patterns, that leverages common business objects for as many transactions as practical.  That isn’t to say there are never times when the economics make sense to custom code something or to leverage point-to-point integration; rather, thinking about reuse and standardized approaches up front is a good delivery practice to avoid downstream cost and complexity, especially when the rate of new technologies being introduced is as high as it is today.

Leveraging Standard Integration (4)

Having mentioned standard integration above, my underlying assumption is that we’re heading towards a near real-time environment where streaming infrastructure and publish and subscribe models are going to be critical infrastructure to enable delivery of key insights and capabilities to consumers of technology.  To the extent that we want that infrastructure to scale and work efficiently and consistently, there is a built-in incentive to be intentional about the data we transmit (whether that is standard business objects or smaller data sets coming from connected equipment and devices) as well as the ways we connect to these pipelines across application and data solutions.  Adding a data publisher or consumer shouldn’t require rewriting anything, any more than plugging a new appliance into a power outlet in your home should require you to unplug something else or change the circuit breaker and wiring itself (except in extreme cases).
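
The appliance analogy can be made concrete with a minimal in-process publish/subscribe sketch: adding a consumer is a subscription, not a rewiring.  This is a toy broker for illustration only; a production environment would use real streaming infrastructure (a message broker or event streaming platform), and the topic and event shown are hypothetical.

```python
# Hypothetical sketch: a minimal publish/subscribe broker. New consumers plug
# in via subscribe() without changing the publisher or other consumers.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler) -> None:
        """Register a handler (consumer) for a topic."""
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        """Deliver an event (ideally a standard business object) to all consumers."""
        for handler in self.subscribers[topic]:
            handler(event)

broker = Broker()
received = []
broker.subscribe("orders.created", received.append)  # existing consumer
broker.subscribe("orders.created", lambda e: None)   # new consumer: no rewiring
broker.publish("orders.created", {"order_id": 42, "total": 99.5})
```

The incentive to standardize the event payloads is what keeps every consumer able to interpret what it receives as the number of publishers and consumers grows.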

Summing Up

I began this article with the observation about delivering projects by comparison with establishing an environment for delivering repeatably at scale.  In my experience, depending on the scale of an organization, some level of many of the things I’ve mentioned above will be in place, along with a potentially large set of pain points and opportunities across the footprint where things are suboptimized.

This is not about boiling the ocean or suggesting we should start over.  The point of starting with the framework itself is to raise awareness that the way we establish the overall environment has a significant ripple effect into our ability to do things we want to do downstream to leverage new capabilities and get the most out of our technology investments later on.  The time spent in design is well worth the investment, so long as it doesn’t become analysis paralysis.

To that end, in summary:

  • Design from the end user and their needs first
  • Think and design with connected ecosystems in mind
  • Be purposeful in how you design and layer services to promote reuse and composability
  • Leverage standards in how you integrate solutions to enable near real-time processing

It is important to note that, while there are important considerations in terms of hosting, security, and data movement across platforms, I’m focusing largely on the organization and integration of the portfolios needed to support an organization.  From a physical standpoint, the conceptual diagram isn’t meant to suggest that any or all of these components or connected ecosystems need to be managed and/or hosted internal to an organization.  My overall belief is that, the more we move to a service-driven environment, the more of a producer/consumer model will emerge where corporations largely act as an integrator and orchestrator (aka “consumer”) of services provided by third-parties (the “producers”).  To the extent that the architecture and standards referenced above are in place, there shouldn’t be any significant barriers to moving from a more insourced and hosted environment to a more consumption-based, outsourced, and cloud-native environment in the future.

With the overall framework in place, the next three articles will focus on the individual elements of the environment of the future, in terms of AI, applications, and data.

Up Next: Integrating Artificial Intelligence

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 07/26/2025

The Intelligent Enterprise 2.0 – The Cost of Complexity

How did we get here…?

The challenges involved in managing a technology footprint today at any medium to large organization are substantial, for a multitude of reasons:

  • Proliferation of technologies and solutions that are disconnected or integrated in inconsistent ways, making simplification or modernization efforts difficult to deliver
  • Mergers and acquisitions that bring new systems into the landscape that aren’t rationalized with or migrated to existing systems, creating redundancy, duplication of capabilities, and cost
  • “Speed-to-market” initiatives involving unique solution approaches that increase complexity and cost of ownership
  • A blend of in-house and purchased software solutions, hosted across various platforms (including multi-cloud), increasing complexity and cost of security, integration, performance monitoring, and data movement
  • Technologies advancing at such a rate, especially with the introduction of artificial intelligence (AI), that organizations can’t integrate them quickly enough or in a consistent manner
  • Decentralized or federated technology organizations that operate with relative autonomy, independent of standards, frameworks, or governance, which increases complexity and cost

Any of the above factors can create enough cost and complexity that the focus within a technology organization shifts from innovation and value creation to struggling to keep the lights on and maintaining a reliable and secure operating environment.

This article will be the first in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”.  The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.


Why It Matters

Before getting into the dimensions of the future state, I wanted to first clarify how these technology challenges manifest themselves in meaningful ways, because complexity isn’t just an IT problem; it’s a business issue, and partnership is important in making thoughtful choices about how we approach future solutions.


Lost Productivity

A leadership team at a manufacturing facility meets first thing in the morning.  It is the first of several meetings they will have throughout the course of the day.  They are setting priorities for the day collectively because the systems that support them (a combination of applications, analytics solutions, equipment diagnostics, and AI tools) are all providing different perspectives on priorities and potential issues, but in disconnected ways, and it is now on the leadership team to decide which of these should receive attention and priority in the interest of making their production targets for the day.  Are they making the best choices in terms of promoting efficiency, quality, and safety?  There’s no way to know.

Is this an unusual situation?  Not at all.  Today’s technology landscape is often a tapestry of applications with varied levels of integration and data sharing, data apps and dashboards meant to provide insights and suggestions, and now AI tools to “assist” or make certain activities more efficient for an end user.

The problem is what happens when all these pieces end up on someone’s desktop, browser, or mobile device.  They are left to copy data from one solution to another, arbitrate which of various alerts and notifications is most important, and identify dependencies to make sure they are taking the right actions in the right sequence (in a case like directed work activity).  That time is lost productivity in itself, regardless of which path they take, and the impact may be amplified further given that retention and/or high turnover are real issues in some jobs, reducing the experience available to navigate these challenges successfully.


Lower Profitability

The result of this lost productivity and ever-expanding technology footprint is both lost revenue (to the extent it hinders production or effective resource utilization) and higher operating cost, especially to the degree that organizations introduce the next new thing without retiring or replacing what was already in place, or integrating things effectively.  Speed-to-market is a short-term concept that tends to cause longer-term cost of ownership issues (as I previously discussed in the article “Fast and Cheap Isn’t Good”), especially to the degree that there isn’t a larger blueprint in place to make sure such advancements are done in a thoughtful, deliberate manner.

To this end, how we do something can be as important as what we intend to do, and there is an argument for thinking through the operating implications when undertaking new technology efforts with a more holistic mindset than a single project tends to take in my experience.


Lost Competitive Advantage

Beyond the financial implications, all of the varied solutions, accumulated technologies and complexity, and custom or interim band-aids built to connect one solution to the next eventually catch up with you in the form of what one organization used to refer to as “waxy buildup” that prevents you from moving quickly on anything.  What seems on paper to be a simple addition or replacement becomes a lengthy process of analysis and design that is cumbersome and expensive, where the lost opportunity is speed-to-market in an increasingly competitive marketplace.

This is where the new market entrants thrive and succeed, because they don’t carry the legacy debt and complexity of entrenched market players who are either too slow to respond or too resistant to change to truly transform at a level that allows them to sustain competitive advantage.  Agility gives way to a “death by a thousand paper cuts” of tactical decisions made that were appropriate and rational in the moment, but created significant amounts of technical debt that inevitably must be paid.


A Vision for the Future

So where does this leave us?  Pack up the tent and go home?  Of course not.

We are at a significant inflection point with AI technology that affords us the opportunity to examine where we are and to start adjusting our course to a more thoughtful and integrated future state where AI, applications, and data and analytics solutions work in concert and harmony with each other versus in a disconnected reality of confusion.

It begins with the consumers of these capabilities, supported by connected ecosystems of intelligent applications, enabled by insights, agents, and experts, that infuse intelligence into the work: making people productive, making businesses agile and competitive, and improving the value derived from technology investments at a level disproportionate to what we can achieve today.

The remaining articles in this series will focus on various dimensions of what the above conceptual model means, as a framework, in terms of AI, applications, and data, and then how we approach that transition and think about it from an IT organizational perspective.

Up Next: Establishing the Framework for the future…

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 07/24/2025

Approaching AI Strategy

Overview

In my first blog article back in 2021, I wrote that “we learn to value experience only once we actually have it”… and one thing I’ve certainly realized is that it’s much easier to do something quickly than to do it well.  The problem is that excellence requires discipline, especially when you want to scale or have sustainable results, and that often comes into conflict with a natural desire to achieve speed in delivery.

There is a tremendous amount of optimism in the transformative value AI can create across a wide range of areas.  While much continues to be written about various tools, technologies, and solutions, there is value in having a structured approach to developing AI strategy and how we will govern it once it is implemented across an organization.

Why?  We want results.

Some historical examples on why there is a case for action:

  • Many organizations have leveraged SharePoint as a way to manage documents. Because it’s relatively easy to use, access to the technology generally is provided to a broad set of users with little or no guidance on how to use it (e.g., a metatagging strategy), and over time a sprawl of content develops that may contain critical, confidential, or proprietary information, with limited overall awareness of what exists and where
  • In the last number of years, Citizen Development has become popular, with the rise of low code, no code, and RPA tools, creating accessibility to automation that is meant to enable business (and largely non-technical) resources to rapidly create solutions, from the trivial to relatively complex. Quite often these solutions aren’t considered part of a larger application portfolio, are managed with little or no oversight, and become difficult to integrate, leverage, or support effectively
  • In data and analytics, tools like Alteryx can be deployed across a broad set of users who, after they are given access to requested data sources, create their own transformations, dashboards, and other analytical outputs to inform ongoing business decisions. The challenge occurs when the underlying data changes, is not understood properly (and downstream inferences can be incorrect), or these individuals leave or transition out of their roles and the solutions they built are not well understood or difficult for someone else to leverage or support

What these situations have in common is the introduction of something meant to serve as an enabler that has relative ease of use and accessibility across a broad audience, but where there also may be a lack of standards and governance to make sure the capabilities are introduced in a thoughtful and consistent manner, leading to inefficiency, increased cost, and lost opportunity.  With the amount of hype surrounding AI, the proliferation of tools, and general ease of use that they provide, the potential for organizations to create a mess in the wake of their experimentation with these technologies seems very significant. 

The focus of the remainder of this article is to explore some dimensions to consider in developing a strategy for the effective use and governance of AI in an organization.  The focus will be on the approach, not the content of an AI strategy, which can be the subject of a later article.  I am not suggesting that everything needs to be prescriptive, cumbersome, or bureaucratic to the point that nothing can get done, but I believe it is important to have a thoughtful approach to avoid the pitfalls that are common to these situations.

In some organizations, “governance” implies control rather than enablement, or there are real or perceived historical IT delivery issues, so there may be concern with heading down this path.  Regardless of how the concepts are implemented, I believe they are worth considering sooner rather than later, given we are still relatively early in the adoption process of these capabilities.

Dimensions to Consider

Below are various aspects of establishing a strategy and governance process for AI that are worth consideration.  I listed them roughly in sequence, as I’d think about them personally, though that doesn’t imply you can’t explore and elaborate as many as are appropriate in parallel, and in whatever order makes sense.  The outcome of the exercise doesn’t need to be rigid mandates, requirements, or guidelines per se, but nearly all of these topics will likely come up, implicitly or otherwise, as we delve further into leveraging these technologies moving forward.

Lead with Value

The first dimension is probably the most important in forming an AI strategy, which is to articulate the business problems being solved and value that is meant to be created.  It is very easy with new technologies to focus on the tools and not the outcomes and start implementing without a clear understanding of the impact that is intended.  As a result, measuring the value created and governing the efficacy of the solutions delivered becomes extremely difficult.

As a person who does not believe in deploying technology for technology’s sake, I see identifying, tracking, and measuring impact as important in ensuring we ultimately make informed decisions in how we leverage new capabilities and invest in them appropriately over time.

Treat Solutions as Assets

Along the lines of the above point, there is risk associated with being consumed by what is “cool” versus what is “useful” (something I’ve written about previously), and treating new technologies like “gadgets” versus actual business solutions.  Where we treat our investments as assets, the associated discipline we apply in making decisions surrounding them should be greater.  This is particularly important in emerging technology because the desire to experiment and leverage new tools could quickly become unsustainable as the number of one-off solutions grows and is unsupportable, eventually draining resources from new innovation.

Apply a Lifecycle Mindset

When leveraging a new technical capability, I would argue that we should think in terms of the full product lifecycle: how we identify, define, design, develop, manage, and retire solutions.  In my experience, the identify (finding new tools) and develop (delivering new solutions) stages receive significant emphasis in a speed-to-market environment, but the others much less so, often to the detriment of an organization that quickly finds itself saddled with the technical debt that comes from neglecting the remaining steps.  This doesn't necessarily imply a lot of additional steps, process overhead, or time and effort, but value is created in each step of a product lifecycle (particularly the early stages), and all of them need due consideration if you want to establish a sustainable, performant environment.  The physical manifestation of some of these steps could be as simple as a checklist that ensures avoidable blind spots or business risks don't arise later on.
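To make the checklist idea concrete, here is a minimal sketch of what one might look like in code.  The stage names follow the identify/define/design/develop/manage/retire breakdown above; the specific questions are illustrative assumptions, not a standard:

```python
# Illustrative lifecycle checklist -- the stage names follow the
# identify/define/design/develop/manage/retire breakdown; the questions
# under each stage are examples, not a prescribed standard.
LIFECYCLE_CHECKLIST = {
    "identify": ["Business problem articulated?", "Expected value stated?"],
    "define":   ["Intended users and scope agreed?"],
    "design":   ["Reuse of an existing solution considered?"],
    "develop":  ["Security and privacy review done?"],
    "manage":   ["Owner assigned?", "Cost and usage being tracked?"],
    "retire":   ["Decommission criteria defined?"],
}

def open_items(answers: dict[str, bool]) -> list[str]:
    """Return every checklist question not yet answered 'yes'."""
    return [q for items in LIFECYCLE_CHECKLIST.values()
            for q in items if not answers.get(q)]
```

Even something this small gives each solution a traceable record of which lifecycle questions were actually considered.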

Define Operating Model

Introducing new capabilities, especially ones where a low barrier to entry and ease of use allow for a wide audience of users, can cause unintended consequences if not managed effectively.  While it's tempting to draw a business/technology dividing line, my experience has been that there can be very technically capable business consumers of technology and very undisciplined technologists implementing it as well.  The point of thinking through the operating model is to identify roles and responsibilities in how you will leverage new capabilities, so that expectations and accountability are clear, along with guidelines for how various teams are meant to collaborate over the lifecycle mentioned above.

Whether the goal is to "empower end users" by fully distributing capabilities across teams with some level of centralized support and governance, or to fully centralize with decentralized demand generation (or any flavor in between), the point is to understand who is best positioned to contribute at different steps of the process and to promote an appropriate level of consistency, so that the performance and efficacy of both the process and the eventual solutions are something you can track, evaluate, and improve over time.  As an example, it would likely be very expensive and ineffective to hire a set of "prompt engineers" operating in a fully distributed manner across a larger organization, compared with having a smaller, centralized set of highly skilled resources who can provide guidance and standards to a broader set of users in a decentralized environment.

Following on from the above, it is also worthwhile to decide whether and how these kinds of efforts should show up in a larger portfolio management process (to the extent one is in place).  Where AI and agentic solutions are meant to displace existing ways of working or produce meaningful business outcomes, the time spent delivering and supporting them should likely be tracked so these investments can be evaluated and managed over time.

Standardize Tools

This will likely be one of the larger issues that organizations face, particularly given where we are with AI in a broader market context today.  Tools and technologies are advancing at such a rapid rate that having a disciplined process for evaluating, selecting, and integrating a specific set of “approved” tools is and will be challenging for some time.

While asking questions of a generic large language model like ChatGPT, Grok, DeepSeek, etc. and changing from one to the other seems relatively straightforward, there is a lot more complexity involved when we want to leverage company-specific data and approaches like RAG to produce more targeted and valuable outcomes.
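To illustrate where that complexity comes from, here is a deliberately toy sketch of the RAG pattern: retrieve the company documents most relevant to a question, then assemble them into a prompt.  The bag-of-words "embedding" below stands in for a provider-specific embedding model, which is exactly the kind of component that doesn't transfer cleanly when you swap vendors; all names and documents are illustrative:

```python
import re
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real RAG pipeline would call a
    # provider-specific embedding model here, which is one of the pieces
    # that must be revisited when switching vendors.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the question and keep the top k.
    q = embed(question)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(question: str, documents: list[str], k: int = 2) -> str:
    # Prompt assembly is another model-specific choice that has to be
    # rethought when the underlying LLM changes.
    context = "\n".join(retrieve(question, documents, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The cafeteria serves lunch from 11am to 2pm.",
    "Refunds are issued to the original payment method.",
]
print(build_prompt("What is the refund policy?", docs, k=1))
```

Even in this toy version, the embedding scheme, the retrieval step, and the prompt format are three separate coupling points to a given vendor, which is why "just switch models" is rarely as simple as it sounds.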

When it comes to agentic solutions, there is also a proliferation of technologies at the moment.  In these cases, managing the cost, complexity, performance, security, and associated data privacy issues will also become complex if there aren’t “preferred” technologies in place and “known good” ways in which they can be leveraged.

Said differently, if we believe effective use of AI is critical to maintaining competitive advantage, we should know that the tools we are leveraging are vetted, producing quality results, and that we’re using them effectively.

Establish Critical Minimum Documentation

I realize it's risky to use profanity in a professional article, but "documentation" has to be mentioned if we assume AI is a critical enabler for businesses moving forward.  Its importance can be summarized if you fast forward one year from today, hold a leadership meeting, and ask, "What are all the ways we are using artificial intelligence, and is it producing the value we expected a year ago?"  If the response contains no specifics or supporting evidence, there should be cause for concern, because significant investment will be made in this area over the next one to two years, and tracking those investments is important to realizing the benefits being promised everywhere you look.

Does “documentation” mean developing a binder for every prompt that is created, every agent that’s launched, or every solution that’s developed?  No, absolutely not, and that would likely be a large waste of money for marginal value.  There should be, however, a critical minimum amount of documentation that is developed in concert with these solutions to clarify their purpose, intended outcome/use, value to be created, and any implementation particulars that may be relevant to the nature of the solution (e.g. foundational model, data sets leveraged, data currency assumptions, etc.).  An inventory of the assets developed should exist, minimally so that it can be reviewed and audited for things like security, compliance, IP, and privacy-related concerns where applicable.
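As a sketch of what a "critical minimum," auditable inventory entry might contain (the field names here are illustrative assumptions, not a standard), consider something like:

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """One inventory entry per deployed AI solution.  Field names are
    illustrative -- adapt them to your own governance needs."""
    name: str
    purpose: str                      # business problem being solved
    owner: str                        # accountable team or individual
    foundation_model: str             # underlying LLM or model in use
    data_sets: list[str] = field(default_factory=list)
    data_currency: str = "unknown"    # assumed freshness of the source data
    expected_value: str = ""          # outcome this investment should produce

def audit_gaps(record: AIAssetRecord) -> list[str]:
    """Flag entries that lack the critical minimum documentation."""
    gaps = []
    if not record.purpose:
        gaps.append("missing purpose")
    if not record.data_sets:
        gaps.append("no data sets declared")
    if not record.expected_value:
        gaps.append("no expected value stated")
    return gaps
```

A list of records like this is enough to answer the leadership-meeting question above, and it gives security, compliance, and privacy reviewers something concrete to audit.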

Develop Appropriate Standards

There are various types of solutions that could be part of an overall AI strategy, and the opportunity to develop standards that promote quality, reuse, scale, security, and so forth is significant.  These could take the form of a "how to" guide for writing prompts, data sourcing and refresh standards for RAG-enabled solutions, reference architectures and design patterns across various solution types, or limits on the number of agents that can be developed without a review for optimization opportunities.  In this regard, something pragmatic, that isn't overly prescriptive but also doesn't reflect a total lack of standards, would be appropriate in most organizations.

In a decentralized operating environment, it is highly probable that solutions will be developed in a one-off fashion, with varying levels of quality, consistency, and standardization, and that could create issues with security, scalability, technical debt, and so on.  Defining the handshake between the consumers of these new capabilities and those developing standards, along with when it is appropriate to define them, could be important things to consider.
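As one small example of a pragmatic standard, a "how to write prompts" guide can even be encoded as a reusable template that fails fast when a required section is missing.  The section names below are an illustrative assumption, not an established convention:

```python
# Required prompt sections -- an illustrative standard, not a convention
# from any particular vendor or framework.
REQUIRED_SECTIONS = ("role", "context", "task", "constraints")

def build_standard_prompt(**sections: str) -> str:
    # Fail fast when a required section is missing or empty, so the
    # standard is enforced in code rather than in a document alone.
    missing = [s for s in REQUIRED_SECTIONS if not sections.get(s)]
    if missing:
        raise ValueError(f"prompt standard violated; missing sections: {missing}")
    return "\n\n".join(f"{s.upper()}:\n{sections[s]}" for s in REQUIRED_SECTIONS)

prompt = build_standard_prompt(
    role="You are a support assistant.",
    context="Customer asks about returns.",
    task="Summarize the refund policy.",
    constraints="Cite only approved policy documents.",
)
```

Encoding the standard this way means consistency doesn't depend on every author having read the guide.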

Design Solutions

Again, as I mentioned in relation to the product lifecycle mindset, there can be a strong preference to deliver solutions without giving much thought to design.  While this is often attributed to "speed to market" and a "bias towards action," it doesn't take long for tactical thinking to lead to a considerable amount of technical debt, an inability to reuse or scale solutions, or significant operating costs that start to slow down delivery and erode value.  These consequences are avoidable when thought is given to architecture and design up front, and the effort nearly always pays off over time.

Align to Data Strategy

This topic could be an article in itself, but suffice it to say that an effective AI strategy is heavily dependent on an organization's overall data strategy and the health of that portfolio.  Said differently: if your underlying data isn't in order, you won't be able to derive much in the way of meaningful insights from it.  Concerns related to privacy and security, data sourcing, stewardship, data quality, lineage and governance, use of multiple large language models (LLMs), effective use of RAG, the relationship of data products to AI insights and agents, and effective ways of architecting for agility, interoperability, composability, evolution, and flexibility are all relevant topics to be explored and understood.

Define and Establish a Governance Process

Having laid out the above dimensions in terms of establishing and operationalizing an AI strategy, there needs to be a way to govern it.  The goal of governance is to achieve meaningful business outcomes by promoting effective use and adoption of the new capabilities, while managing exposure related to introducing change into the environment.  This could be part of an existing governance process or set up in parallel and coordinated with others in place, but the point is that you can’t optimize what you don’t monitor and manage, and the promise of AI is such that we should be thoughtful about how we govern its adoption across an organization.

Wrapping Up

I hope the ideas were worth considering.  For more on my thoughts on AI in particular, my articles Exploring Artificial Intelligence and Bringing AI to the End User can provide some perspective for those who are interested.

Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 03/17/2025

Ethics and Technology

Overview

Having worked in both consulting and corporate environments for many years and across multiple industries, I believe we're at an interesting juncture in how technology is leveraged at a macro level, and in the broader business and societal impacts of those choices.  The examples range from AI-generated content that could easily be mistaken for "real" events, to the use of data collection and advanced analytics in various business and consumer scenarios to create "competitive advantage."

The latter scenario has certainly been discussed for quite a while, whether in relation to managing privacy while using mobile devices, trusting how a search engine "anonymizes" your data before potentially selling it to third parties, or questioning whether the results presented as the outcome of a search (or GenAI request) are objective and unbiased, or are presented with some level of influence given the policies or leanings of the sponsoring organization.

The question to be explored here is: How do we define the ethical use of technology?

For the remainder of this article, I’ll suggest some ways to frame the answer in various dimensions, acknowledging that this isn’t a black-and-white issue and the specifics of a situation could make some of the considerations more or less relevant.

Considerations

What Ethical Use Isn’t

Before diving into what I believe could be helpful in framing an answer, I wanted to clarify what I don't consider a valid approach, namely the argument used in a majority of cases where individuals or organizations cross a line: we can use technology in this way because it gives us competitive advantage.

Competitive advantage tends to be an easy argument for doing something questionable, because there is a direct or indirect financial benefit associated with the decision, which clouds the underlying ethics of the decision itself.  "We're making more money, increasing shareholder value, managing costs, increasing profitability," and so on are the things that tend to move the needle against the roadblocks that can exist in organizations when mobilizing new ideas.  The problem is that, for all of the data collected in the interest of securing approval and funding for an initiative, I haven't seen many cases where there is a proverbial box to check confirming that the effort conforms to ethical standards (whether specific to technology use or otherwise).

What Ethical Use Could Be

That point having been stated, below are some questions that could be considered as part of an “ethical use policy”, understanding that not all may have equal weight in an evaluation process.

They are:

  • Legal/Compliance/Privacy
    • Does the ultimate solution conform to existing laws and regulations for your given industry?
    • Is there any pending legislation related to the proposed use of technology that could create such a compliance issue?
    • Is there any industry-specific legislation that would suggest a compliance issue if it were logically applied in a new way that relates to the proposed solution?
    • Would the solution cause a compliance issue in another industry (now or in legislation that is pending)? Is there risk of that legislation being applied to your industry as well?

  • Transparency
    • Is there anything about the nature of the solution that, were it shared openly (e.g., through a press release or an industry conference/trade show), would cause customers, competitors, or partners/suppliers to raise issues with the organization's market conduct or end-user policies? This can be a tricky item given the previous points on competitive advantage: what might be labeled a "trade secret" could potentially violate antitrust, privacy, or other market expectations.
    • Does anything about the nature of the solution, were it to be shared openly, suggest that it could cause trust issues between the organization and its customers, competitors, suppliers, or partners, and if so, why?

  • Cultural
    • Does the solution align to your organization’s core values? As an example, if there is a transparency concern (above) and “Integrity” is a core value (which it is in many organizations), why does that conflict exist?
    • Does the solution conform to generally accepted practices or societal norms in terms of business conduct between you and your target audience (customers, vertical or horizontal partners, etc.)?

  • Social Responsibility
    • Does the solution create any potential issues from an environmental, societal, or safety standpoint that could have adverse impacts (direct or indirect)?

  • Autonomy and Objectivity
    • Does the solution provide an unbiased, fact-based (or analytically correct) outcome that can also be governed, audited, and verified? This is an important dimension to consider given that our dependency on automation continues to increase, and we want to be able to trust the security, reliability, accuracy, and so on of what that technology provides.

  • Competitive
    • If a competitor announced they were developing a solution of exactly the same nature as what is proposed, would it be a comfortable situation, or something you would challenge as an unethical or unfair business practice? Quite often, the lens through which unethical decisions are made is biased by an internal focus.  If that line of sight were reversed and a competitor were open about doing exactly the same thing, would that be acceptable or not?  If there would be issues, there is likely cause for concern in developing the solution yourself.

Wrapping Up

From a process standpoint, a suggestion would be to take the above list and discuss it openly, in the interest not only of determining the right criteria for you, but also of establishing where these opportunities exist (because they do and will, increasingly so as analytics and AI-focused capabilities advance).  Ultimately, there should be a check-and-balance process for the ethical use of technology, in line with any broader compliance and privacy-related efforts that may already exist within an organization.

In the end, the "right thing to do" can be a murky and difficult question to answer, especially with ever-expanding tools and technologies that create capabilities a digital business can use to its advantage.  But that's where culture and values should still come in, not simply because there is or isn't a compliance issue, but because reputations are made and reinforced over time through these kinds of decisions, and they either help build a brand or damage it when the right questions aren't explored at the right time.

It’s interesting to consider, as a final note, that most companies have an “acceptable use of IT” policy for employees, contractors, and so forth, in terms of setting guidelines for what they can or can’t do (e.g., accessing ‘prohibited’ websites / email accounts or using a streaming platform while at work), but not necessarily for technology directed outside the organization.  As we enter a new age of AI-enabled capabilities, perhaps it’s a good time to look at both.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 10/25/2024