Optimizing the Value of IT

Overview

Given the challenging economic environment, I thought it would be a good time to revisit something that was an active part of my work for several years, namely IT cost optimization.

In the spirit of Excellence by Design, I don’t consider cost optimization to be a moment-in-time activity that becomes a priority on a periodic (“once every X years”) or reactive basis.  Optimizing the value/cost ratio should always be a priority in the interest of maintaining disciplined operations, organizational agility, technical relevance, and competitive advantage.

In the consulting business, this is somewhat of a given, as most clients want more value for the money they spend on an annualized basis, especially if the service is provided over a period of time.  Complacency is the fastest path to losing a client and, consequently, there is a direct incentive to look for ways to get better at what you do or, where the capability itself is already relatively optimized, to provide equivalent service at a lower cost.

On the corporate side, however, where the longer-term ramifications of technology decisions bear out in accumulated technical debt and complexity, the choices become more difficult: they are less about a project, program, or portfolio and more about the technology footprint, operating model, and organizational structure as a whole.

To that end, I’ll explore various dimensions of how to think about the complexity and makeup of IT from a cost perspective, along with the levers available to optimize value/cost.  I’m being deliberate in mentioning both value and cost because it is very easy to reduce costs and have an adverse impact on service quality or agility, which is why thoughtful analysis is important in making informed choices on improving cost-efficiency.

Framing the Problem

Before looking at the individual dimensions, I first wanted to cover the simple mental model I’ve used for many years in terms of driving operating performance:

[Figure: the Transparency → Governance → Improvement cycle]

This model is based on three connected components that feed each other in a continuous cycle:

  • Transparency
    • We can’t govern what we can’t see. The first step in driving any level of thoughtful optimization is having a fact-based understanding of what is going on
    • This isn’t about seeing or monitoring “everything”. It is about understanding the critical, minimum information that is needed to make informed decisions and then obtaining as accurate a set of data surrounding those points as possible.
  • Governance
    • With the above foundation in place, the next step is to have leadership engagement to review and understand the situation, and identify opportunities to improve.
    • This governance is a critical step in any optimization effort because, if sustainable organizational or cultural changes are not made in the course of the transformation, the likelihood of things returning to a similar condition is relatively high.
  • Improvement
    • Once opportunities are identified, executing effectively on the various strategies becomes the focus, with the goal of achieving the outcomes defined through the governance process
    • The outcomes of this work should then be reflected in the next cycle of operating metrics and the cycle can be repeated on a continuing basis.

The process for optimizing IT costs is no different than what is expressed here: understand the situation first, then target areas of improvement, make adjustments, continue.  It’s a process, not a destination.  From here, we’ll explore the various dimensions of complexity and cost within IT, and the levers to consider in adjusting them.

 

At an Operating Level

Before delving into the footprint itself, a couple of areas to consider at an overall level are portfolio management and release strategy.

 

Portfolio management

Given that I am midway through writing an article on portfolio management and am also planning a separate one on workforce and sourcing strategy, I won’t explore this topic much beyond saying that having a mature portfolio management process can help influence cost-efficiency.

That being said, I don’t consider ineffective portfolio management to be a root cause of IT value/cost being imbalanced.  An effective workforce and sourcing strategy that aligns variable capacity to sources of demand fluctuation (within reasonable cost constraints) should enable IT to deliver significant value even during periods of increased business demand.  However, a lack of effective prioritization, disciplined estimation and planning, resource planning, and sourcing strategy, in combination with each other, can have significant and harmful effects on cost-efficiency and, therefore, generally provides opportunities for improvement.

Some questions to consider in this area:

  • Is prioritization effective in your organization? When “priority” efforts arise, are other ongoing efforts stopped or delayed to account for them, or is the general trend to take on more work without recalibrating existing commitments?
  • Are estimation and planning efforts benchmarked, reviewed, analyzed, and improved, so that ongoing prioritization and slotting of projects can be done with integrity?
  • Is there a defined workforce and sourcing strategy to align variable capacity to fluctuating demand so that internal capacity can be reallocated effectively and sourcing scaled in a way that doesn’t disproportionately have an adverse impact on cost? Conversely, can demand decline without significant need for recalibration of internal, fixed capacity?  There is a situation I experienced where we and another part of the organization took the same level of financial adjustment, but they had to make 3x the level of staffing adjustment because we were operating under a defined sourcing strategy and the other organization wasn’t.  This is an important reason to have a workforce and sourcing strategy.
  • Is resource planning handled on an FTE basis (e.g., role-based), a named-resource basis, or some combination thereof? What is the average utilization of “critical” resources across the organization on an ongoing basis? (A simple sketch of this calculation follows this list.)
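To make that last question concrete, here is a minimal sketch, with hypothetical data and a simple allocated-versus-available definition of utilization, of how average utilization of critical resources might be reviewed:

```python
# Minimal sketch with hypothetical data: utilization = allocated hours / available hours.
assignments = [
    # (resource or role, allocated hours this period, available hours this period)
    ("lead architect", 150, 160),
    ("integration SME", 184, 160),      # over-allocated
    ("data engineering pool", 96, 160),
]

for name, allocated, available in assignments:
    utilization = allocated / available
    flag = "  <-- over-allocated" if utilization > 1.0 else ""
    print(f"{name}: {utilization:.0%}{flag}")

average = sum(a / b for _, a, b in assignments) / len(assignments)
print(f"average utilization of critical resources: {average:.0%}")
```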

Release strategy

In my experience, this is an area that often seems overlooked (outside product delivery environments) as a means to improve delivery effectiveness, manage cost, and improve overall quality.

Having a structured release strategy that accounts for major and minor releases, with defined criteria and established deployment windows, versus an arbitrary or ad-hoc approach can be a significant benefit from both an IT delivery and a business continuity perspective.  Generally speaking, release cycles (in a non-CI/CD, DevSecOps-oriented environment) tend to consume time and energy that slow delivery progress, and the more windows that exist, the more disruption occurs over a calendar year.  When those windows are allowed to occur on an ad-hoc basis, the complexities of integration testing, configuration management, and coordination from a project, program, and change management perspective tend to increase in proportion to the number of release windows involved.  Similarly, the risk of quality issues within and across a connected ecosystem increases as the process of stabilizing and testing individual solutions, integrating across solutions, and managing post-deployment production issues is spread across multiple teams in overlapping efforts.  Where standard integration patterns and a reference architecture are in place to govern interactions across connected components, there are ways to manage and mitigate that risk, but generally speaking, it is better and more cost-effective to manage a smaller set of larger, scheduled release windows than to allow a more random or ad-hoc environment to exist at scale.
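As a toy illustration of that proportionality argument (the figures are assumptions, not benchmarks), attach a roughly fixed coordination and stabilization overhead to each release window and compare cadences:

```python
# Toy model: each release window carries a roughly fixed overhead for integration
# testing, configuration management, and coordination. The figures are assumptions.
overhead_team_days_per_window = 8

for label, windows_per_year in [("scheduled (quarterly majors + planned minors)", 6),
                                ("ad hoc across overlapping teams", 18)]:
    total = windows_per_year * overhead_team_days_per_window
    print(f"{label}: ~{total} team-days of release overhead per year")
```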

 

Applications

In the application footprint, larger organizations or those built through acquisition tend to have a fairly diverse and potentially redundant application landscape, which can lead to significant cost and complexity in both maintaining and integrating the various systems in place.  This is also true when significant internally developed (custom) solutions work in concert with external SaaS solutions or software packages.

Three main levers can have a significant influence along the lines of what I discuss in The Intelligent Enterprise:

  • Ecosystem Design
    • Whether one chooses to refer to this as business architecture, domain-driven design, component architecture, or something else, the goal is to identify and govern a set of well-defined connected ecosystems that are composable, made up of modular components that provide a clear business (or technical) capability or set of services
    • This is a critical enabler both for optimizing the application footprint and for promoting interoperability and innovation over time, as new capabilities can be more rapidly integrated into a standards-based environment
    • Complexity arises where custom or SaaS/package solutions are integrated in ways that blur these component boundaries and create functional overlaps, leading to technical debt, redundancy, data integrity issues, etc.

 

  • Integration strategy
    • With a set of well-defined components, the secondary goal is to leverage standard integration patterns with canonical objects to promote interoperability, simplification, and ongoing evolution of the technology footprint over time.
    • Without standards for integration, an organization’s ability to adopt new, innovative technologies will be significantly hindered over time and the leverage of those investments marginalized, because of the complexity involved in bringing those capabilities into the existing environment rapidly without having to refactor or rewrite a portion of what exists to leverage them.
    • At an overall level, it is hard to argue that technologies are not advancing faster than any organization’s ability to adopt and integrate them, so having a well-defined and heavily leveraged enterprise integration strategy is critical to long-term value creation and competitive advantage (a small sketch of a canonical object appears at the end of this section).

 

  • Application Rationalization
    • Finally, with defined ecosystems and standards for integration, having the courage and organizational leadership to consolidate like solutions to a smaller set of standard solutions for various connected components can be a significant way to both reduce cost and increase speed-to-value over time.
    • I deliberately focused on the organizational aspects of rationalization, because one of the most significant obstacles in technology simplification is the courageous leadership needed to “pick a direction” and handle the objections that invariably arise when those tradeoff decisions are made.
    • Technology proliferation can be caused by a number of things, but organizational behaviors certainly contribute when two largely comparable solutions coexist without one of them being retired, solely because of resistance to change or perceived control or ownership associated with a given solution.
    • At a capability level, evaluating similar solutions, understanding their functional differences, and associating value with those dimensions is a good starting point for simplifying what is in place (a simple scoring sketch follows this list). That being said, the largest challenge in application rationalization doesn’t tend to be identifying the best solution; it’s having the courage to make the decision, commit the investment, and execute on the plan, given that “new projects” tend to get more organizational focus and priority in many companies than cleaning up what is already in place.  In a budget-constrained environment, the new, shiny thing tends to win in a prioritization process, which is something I’ll write about in a future article.
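As a simple illustration of that capability-level evaluation, a weighted comparison can make functional overlap and tradeoffs explicit before the harder organizational decision is made; the capabilities, weights, and scores below are hypothetical:

```python
# Hypothetical weighted comparison of two overlapping solutions for the same component.
weights = {"functional fit": 0.35, "integration fit": 0.25,
           "total cost of ownership": 0.25, "support/skills": 0.15}

scores = {  # 1 (poor) to 5 (strong); illustrative only
    "Solution A (incumbent)": {"functional fit": 4, "integration fit": 2,
                               "total cost of ownership": 2, "support/skills": 4},
    "Solution B (candidate standard)": {"functional fit": 4, "integration fit": 5,
                                        "total cost of ownership": 4, "support/skills": 3},
}

for solution, s in scores.items():
    total = sum(weights[k] * s[k] for k in weights)
    print(f"{solution}: weighted score {total:.2f}")
```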

Overall, the larger the organization, the more opportunity may exist in the application domain, and the good news is that there are many things that can be done to simplify, standardize, rationalize, and ultimately optimize what’s in place in ways that both reduce cost and increase the agility, speed, and value that IT can deliver.
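To make the integration-strategy lever above a bit more tangible, here is a minimal sketch, with hypothetical names and fields, of a canonical object and a per-system adapter, so that consuming components integrate against one shared shape rather than each source system’s native payload:

```python
from dataclasses import dataclass

# Hypothetical canonical object shared across the ecosystem.
@dataclass
class CanonicalCustomer:
    customer_id: str
    name: str
    country: str

# Adapter for one (hypothetical) source system's native payload.
def from_crm_payload(payload: dict) -> CanonicalCustomer:
    return CanonicalCustomer(
        customer_id=str(payload["accountNumber"]),
        name=payload["displayName"],
        country=payload.get("countryCode", "US"),
    )

# Consumers work only with the canonical shape; swapping the CRM later
# means rewriting the adapter, not every consumer.
customer = from_crm_payload({"accountNumber": 42, "displayName": "Acme Co", "countryCode": "DE"})
print(customer)
```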

 

Data

The data landscape and its associated technologies, especially when considering advanced analytics, have added significant complexity (and likely cost) in the last five to ten years in particular.  With the growing demand for AI/ML, NLP, and now Generative AI-enabled solutions, the ability to integrate, manage, and expose data from producer to ultimate consumer has become critically important.

Some concepts that I consider directionally important in relation to optimizing value/cost in data and analytics enablement:

  • Managing separation of concerns
    • Similar to the application environment, thinking of the data and analytics environment (OLTP included) as a set of connected components with defined responsibilities, connected through standard integration patterns is important to reducing complexity, enabling innovation, and accelerating speed-to-value over time
    • Significant technical debt can be created where the relationships among operational data stores (ODS), analytics technologies, purpose-built solutions (e.g., graph or time series databases), master data management tools, data lakes, lakehouses, virtualization tools, visualization tools, data quality tools, and so on are not integrated in clear, purposeful ways.
    • Where I see value in “data centricity” is in the way it serves as a reminder to understand the value that can be created for organizations in leveraging the knowledge embedded within their workforce and solutions
    • I also, however, believe that value will be unlocked over time through intelligent applications that leverage knowledge and insights to accelerate business decisions, drive purposeful collaboration, and enable innovation and competitive advantage. Data isn’t the outcome, it’s an enabler of those outcomes when managed effectively.

 

  • Minimizing data movement
    • The larger the landscape and the number of solutions involved in moving source data from the original producer (whether a connected application, device, or piece of equipment) to the end consumer (however that consumption is enabled), the greater the impact on innovation and business agility.
    • As such, concepts like data mesh and data fabric, which enable distributed sourcing of data in near real time with minimal data movement to feed analytical solutions and/or deliver end-user insights, are important to think through in a longer-term data strategy.
    • In a perfect world, where data enrichment is not a critical requirement, the ability to virtualize, integrate, and expose data across various sources to conceptually “flatten” the layers of the analytics environment is an area where end-consumer value can be increased while reducing the cost typically associated with ETL, storage, and compute spread across various components of the data ecosystem.
    • Concepts like zero ETL, data sharing, and virtualization are also key enablers that show promise in this regard (a toy sketch of querying data in place follows this list).

 

  • Limiting enabling technologies
    • As in the application domain, the more diverse and complex a data ecosystem is, the more likely it is that a set of technologies with overlapping or redundant capabilities is in place.
    • At a minimum, a thoughtful process for reviewing and governing new technology introductions, evaluating how they complement, replace, or duplicate solutions already in place, is an important capability to have.
    • Similarly, it is not uncommon to introduce new technologies with something of a “silver bullet” mindset, without considering the implications of supporting or operating those solutions (which can increase cost and complexity) or having a deliberate plan to replace or retire other solutions that provide a similar capability.
    • Simply put, technical debt accumulates over time through a series of individually rationalized and justified, but collectively suboptimized, short-term decisions.
  • Rationalize, simplify, standardize
    • Finally, where defined components exist, data sourcing and movement is managed, and technology introductions are governed, there should be an ongoing effort to modernize, simplify, and standardize what is already in place.
    • Data solutions tend to be very “purpose-built” in their orientation, to the degree that they enable a specific use case or outcome. The problem occurs when the desired business architecture becomes the de facto technical architecture and significant complexity is created in the process.
    • Using a parallel, smaller-scale analogy, there is a reason that logical and physical data modeling are separate activities in application development (the former part of traditional “business design” versus the latter being part of “technical design” in waterfall-based approaches). What makes sense from a business or logical standpoint likely won’t be optimal if architected exactly as defined in that context (e.g., most business users don’t think intuitively in third normal form, nor should they have to).
    • Modern technologies allow for relatively cheap storage, and giving thought to how the underlying physical landscape should be designed from producer to consumer is critical both to enabling insight delivery at speed and to doing so within a managed, optimized technology environment.
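As a toy illustration of the “minimize data movement” idea above, here is a sketch using DuckDB (one of several engines that can do this; the orders.parquet file and its columns are hypothetical) to query data where it sits rather than copying it through another ETL hop first:

```python
import duckdb

# Query the Parquet file in place; no load or copy into a separate store.
result = duckdb.sql("""
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM read_parquet('orders.parquet')
    GROUP BY region
    ORDER BY revenue DESC
""")
result.show()
```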

Overall, similar to the application domain, there are significant opportunities to enable innovation and speed-to-value in the data and analytics domain, but a purposeful and thoughtful data strategy is the foundation for being cost-effective and creating long-term value.

 

Technologies

I’ve touched on technologies through the process of discussing optimization opportunities in both the application and data domains, but it’s important to understand the difference between technology rationalization (the tools and technologies you use to enable your IT environment) and application or data rationalization (the solutions that leverage those underlying technologies to solve business problems).

The process for technology simplification is the same as described in the other two domains, so I won’t repeat the concepts here beyond reiterating that a strong package or technology evaluation process (one that considers the relationship to existing solutions in place), along with governance of new technology introductions that includes explicit plans to replace or retire legacy equivalents and to ensure organizational readiness to support the new technologies in production, is critical to optimizing value/cost in this dimension.

 

Infrastructure

At an overall level, unless there is a significant compliance, competitive, privacy, or legal reason to do so, I would argue that no one should be in the infrastructure business unless it IS their business.  That may be a somewhat controversial point of view, but at a time when cloud and hosting providers are both established and mature, arguing the differentiated value of providing (versus managing) these capabilities within a typical IT department is a significant leap of faith in my opinion.  Internal and external customer value and innovation are created in the capabilities delivered through applications, not in the infrastructure, networking, and storage underlying those solutions.  This isn’t to say these capabilities aren’t a critical enabler; they definitely are.  The overall organizational goal in infrastructure, from my perspective, should be to ensure quality of service at the right cost (through third-party providers to the maximum extent possible), and then manage and govern the reliability and performance of those environments, focusing on continuous improvement and enabling innovation as required by consuming solutions over time.

There are a significant number of cost elements associated with infrastructure, a lot of financial allocations involved, and establishing TCO through these indirect expenses can be highly complex in most organizations.  As a result, I’ll focus on three overall categories that I consider significant (cloud, hosted solutions, and licensing), while acknowledging there is normally opportunity to optimize value/cost in this domain beyond these three alone.  This is partially why working with a defined set of providers and managing and governing the process can be a way to focus on quality of service and desired service levels within established cost parameters, versus taking on the challenge of operationalizing a substantial set of these capabilities internally.

Certainly, a level of core network and cyber security infrastructure is necessary and critical to an organization under any circumstances, something I will touch on in a future article on the minimum requirements to run an innovation-centric IT organization, but even in those cases, that does not imply or require that those capabilities be developed or managed internally.

 

Cloud

With the ever-expanding set of cloud-enabled capabilities, there are three critical watch items that I believe have significant impact on cost optimization over time:

  • Innovation
    • Cloud platform providers are making significant advancements in their capabilities on an annual basis, some of which can help enable innovation
    • To the extent that some of the architecture and integration principles above are leveraged, and a thoughtful, disciplined process is used to evaluate and manage the introduction of new technologies over time, organizations can benefit from leveraging cloud as part of their infrastructure strategy

 

  • Multi-cloud Integration
    • The reality of cloud providers today is that no one is good at everything, and there is differentiated value in the various services provided by each of them (GCP, Azure, AWS)
    • The challenge is how to integrate and synthesize these differentiated capabilities in a secure way without either creating significant complexity or cost in the process
    • Again, having a modular, composable architecture mindset with API- or service-based integration is critical in finding the right balance for leveraging these capabilities over time
    • Significant complexity and cost can be created where data egress comes into play from one cloud platform to another; consequently, the need for such data movement should, in my opinion, be minimized to situations where the value of doing so (ideally without persisting the data in the target platform) greatly outweighs the cost of operating in that overall environment

 

  • FinOps Discipline
    • The promise of managed platforms that convert traditional capex to opex is certainly an attractive argument for moving away from insourced and hosted solutions to the cloud (or to a managed hosting provider, for that matter). The challenge is in having a disciplined process for leveraging cloud services, understanding how they are being consumed across an organization, and optimizing their use on an ongoing basis (a small sketch of this kind of spend review follows this list).
    • Understandably, there is not a direct incentive for platform providers to optimize this on their own, and their native tools largely provide transparency into spend related to consumption of various services over time rather than actively optimizing it.
    • Hopefully, as these providers mature, we’ll see more integrated tooling within and across cloud providers to help continuously optimize a footprint so that it provides reliability and scalability without promoting over-provisioning or other costs that don’t deliver end-customer value.
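As a small sketch of the “do the homework” side of FinOps, here is a hypothetical month-over-month review of spend by service; the export format, file name, and thresholds are assumptions, since real cost-and-usage exports vary by provider:

```python
import csv
from collections import defaultdict

# Assumed export format: service, month (YYYY-MM), cost. Real billing exports differ.
spend = defaultdict(dict)
with open("cloud_spend_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        spend[row["service"]][row["month"]] = float(row["cost"])

PRIOR, CURRENT = "2023-02", "2023-03"   # months under review (hypothetical)
GROWTH_THRESHOLD = 0.25                 # flag >25% month-over-month growth

for service, by_month in sorted(spend.items()):
    prior, current = by_month.get(PRIOR), by_month.get(CURRENT)
    if prior and current and (current - prior) / prior > GROWTH_THRESHOLD:
        print(f"review {service}: {prior:,.0f} -> {current:,.0f} ({(current / prior - 1):.0%})")
```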

Given the focus of this article is cost optimization and not cloud strategy, I’m not getting into cloud modernization, automation and platform services, containerization of workloads, or serverless computing, though arguably some of those also can provide opportunities to enable innovation, improve reliability, enable edge-based computing, and optimize value/cost as well.

 

Internally Managed / Hosted

Given how far we are into the age of cloud computing, I’m assuming that legacy environments have largely been moved onto converged infrastructure.  In some organizations this may not be the case, and those environments should be evaluated, along with the potential for outsourcing their hosting and management where possible (and competitive) at a reasonable value/cost level.

One interesting observation is that organizations tend not to want to make significant investments in modernizing legacy environments, particularly those in financial services resting on mainframe or midrange computing solutions.  That being said, given that these are normally shared resources, as the burden of those costs shifts (where teams selectively modernize and move off those environments) and allocations of the remaining MIPS and other hosting charges are adjusted, the priority of revisiting those strategies tends to change.  Modernization should be a continuous, proactive process rather than a reactive one, because the resulting technology decisions can otherwise be suboptimized and turned into lift-and-shift approaches rather than true modernization or innovation opportunities (I’d consider this under the broader excellence topic of relentless innovation).

 

Licensing

The last infrastructure dimension I’d call out relates to licensing.  While I’ve already addressed the opportunity to promote innovation and optimize expense by rationalizing applications, data solutions, or underlying technologies individually, there are three other dimensions worth consideration:

  • Partner Optimization
    • Between leverage of multi-year agreements on core, strategic platforms and consolidation of tools (even in a best-of-breed environment) to a smaller set of strategic, third-party providers, there are normally opportunities to reduce the number of technology partners and optimize costs in large organizations
    • The watch item would be to ensure such consolidation efforts consider volatility in the underlying technology environment (e.g., the commitment might be too long for situations where the pace of innovation is very high) while also ensuring conformance to the component and integration architecture strategies of the organization so as not to create dependencies that would make transition of those technologies more complex in the future

 

  • Governance and Utilization
    • Where licensing costs are either consumption-based or up for renewal, having established practices for revisiting the value and usage of core technologies over time can help in optimization. This can also be important in ensuring compliance to critical contract terms where appropriate (e.g., named user scenarios, concurrent versus per-seat agreements)
    • In one example a number of years ago, we decided to investigate indirect expense coming through software licenses and uncovered nearly a million dollars of software that had been renewed on an annual basis that wasn’t being utilized by anyone. The reality is that we treated these as bespoke, fixed charges and no one was looking at them at any interval.  All we needed to do in that case was pay attention and do the homework.

 

  • Transition Planning
    • This is the most important of the three areas and is akin to having a governance process in place.
    • With regard to transition, the idea is to establish a companion process to the software renewal cycle for critical, core technologies (i.e., those providing a critical capability or carrying significant associated expense). This process would involve a health check (similar to a package selection, but including incumbent technologies/solutions) at a point commensurate with the window of time it would take to evaluate and replace the solution if it were no longer the best option to provide a given capability (a simple sketch of this kind of renewal review follows this list).
    • Unfortunately, depending on the level of dependency that exists on third-party solutions, it is not uncommon for organizations to lack a disciplined process for reviewing technologies in advance of their contractual renewal period and to be forced to extend their licenses for lack of time to do anything else.
    • The result can be that organizations deploy new technologies in parallel with ones that are no longer competitive, purely because they didn’t plan in advance for those transitions to occur in an organic way.
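Here is a minimal sketch of that kind of review, combining the utilization and transition-planning points above; the license records, thresholds, and lead times are hypothetical:

```python
from datetime import date

# Hypothetical license inventory: (product, annual cost, active users, seats, renewal date).
licenses = [
    ("analytics-suite",      240_000,  12,   500, date(2023, 9, 1)),
    ("legacy-etl-tool",       90_000,   0,   100, date(2023, 6, 15)),
    ("core-erp-platform",  1_200_000, 950, 1_000, date(2024, 3, 31)),
]

TODAY = date(2023, 4, 9)
HEALTH_CHECK_LEAD_DAYS = 180   # assumed evaluation/replacement window

for product, cost, active, seats, renewal in licenses:
    utilization = active / seats
    days_to_renewal = (renewal - TODAY).days
    notes = []
    if utilization < 0.25:
        notes.append(f"low utilization ({utilization:.0%}): candidate to retire or downsize")
    if days_to_renewal <= HEALTH_CHECK_LEAD_DAYS:
        notes.append("inside health-check window: evaluate alternatives before renewal")
    if notes:
        print(f"{product} (${cost:,}/yr, renews in {days_to_renewal} days): " + "; ".join(notes))
```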

Similar to the other categories, where licensing is a substantial component of IT expense, the general point is to be proactive and disciplined about managing and governing it.  This is a source of overhead that is easy to overlook and that can create an undue burden on the overall value/cost equation.

 

Services

I’m going to write about workforce and sourcing strategy separately, so I won’t go deeply into that topic or direct labor in this article beyond a few points on each.

In optimizing cost of third-party provided services, a few dimensions come to mind:

  • Sourcing Strategy
    • Understanding and having a deliberate mapping of primary, secondary and augmentation partners (as appropriate) for key capabilities or portfolios/solutions is the starting point for optimizing value/cost
    • Where a deliberate strategy doesn’t exist, the ability to monitor, benchmark, govern, manage, and optimize will be complex and only partially effective
    • Effective sourcing, and certain approaches to how partners are engaged, can also be a key lever in enabling rapid execution of key strategies, managing migration across legacy and modernized environments, establishing new capabilities where a talent base doesn’t currently exist inside an organization, and optimizing expense that may be fragmented across multiple partners or enabled through contingent labor in ad-hoc ways, all of which can help improve the value/cost ratio on an ongoing basis

 

  • Vendor Management
    • It’s worth noting that I’m using the word “vendor” here because the term is fairly well understood and standard when it comes to this process. In practice, I never use the word “vendor,” preferring “partner,” as I believe the latter signals a healthy approach and mindset when it comes to working with third parties.
    • Having worked in several consulting organizations over a number of years, it was very easy to tell which clients operated in a vendor versus a partnership mindset and the former of the two can be a disincentive to making the most of these relationships
    • That being said, organizations should have an ongoing, formalized process for reviewing key partner relationships, performance against contractual obligations, on-time delivery commitments, quality expectations, management of change, and achievement of strategic partner objectives.
    • There should also be a process in place to solicit ongoing feedback, both on how to improve effectiveness and the relationship and to understand and leverage the knowledge and insights a partner has on industry and technology trends and innovation opportunities that can further increase value/cost performance over time.

 

  • Contract Management
    • Finally, having a defined, transparent, and effective process for managing contractual commitments and the associated incentives where appropriate can also be important to optimizing overall value/cost
    • It is generally true that partners don’t deliver to standards that aren’t established and governed
    • Defining service levels and quality expectations, utilizing fixed-price or risk-sharing models where appropriate, and then reviewing and holding both partners and the internal organization working with those partners accountable to those standards is important to having both a disciplined operating environment and a disciplined delivery environment (a simple sketch of this kind of review follows this list)
    • There’s nothing wrong with assuming everyone will do their part when it comes to living into the terms of agreements, but there also isn’t harm in keeping an eye on those commitments and making sure that partner relationships are held to evolving standards that promote maturity, quality, and cost effectiveness over time
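Here is a minimal sketch of that kind of periodic review, using hypothetical delivery records and service-level targets:

```python
# Hypothetical delivery records for one partner: (milestone, delivered on time?, defects found)
deliveries = [
    ("release 1", True, 3),
    ("release 2", False, 9),
    ("release 3", True, 2),
    ("release 4", True, 5),
]

ON_TIME_TARGET = 0.90   # assumed contractual service level
DEFECT_TARGET = 4       # assumed quality expectation per release

on_time_rate = sum(1 for _, on_time, _ in deliveries if on_time) / len(deliveries)
avg_defects = sum(d for _, _, d in deliveries) / len(deliveries)

print(f"on-time delivery: {on_time_rate:.0%} (target {ON_TIME_TARGET:.0%})")
print(f"average defects per release: {avg_defects:.1f} (target <= {DEFECT_TARGET})")
if on_time_rate < ON_TIME_TARGET or avg_defects > DEFECT_TARGET:
    print("flag for the next partner governance review")
```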

Similar to the other categories, the level of investment in sourcing, whether through professional service firms or contingent labor, should drive the level of effort involved in understanding, governing, and optimizing it, but some level of process and discipline should be in place under almost any scenario.

 

Labor

The final dimension in optimizing value and cost is direct labor.  I’m guessing, in writing this, that it’s fairly obvious I put this category last, and I did so intentionally.  It is often said that “employees are the greatest source of expense” in an organization.  Interestingly enough, “people are our greatest asset” has also been said many times.

In the section on portfolio management, I mentioned the importance of having a workforce and sourcing strategy and understanding the alignment of people to demand on an ongoing basis.  That is a given and should be understood and evaluated with a critical eye toward how things flex and adjust as demand fluctuates.  It is also a given that an organization focused on excellence should be managing performance on a continuing basis (including in times of favorable market conditions) so as not to create organizational bloat or ineffectiveness.  Said differently, poor performance that goes unmanaged drags down average productivity, has an adverse impact on quality, and ultimately has a negative impact on value/cost because the working capacity of the organization isn’t being applied effectively to ongoing demand and delivery needs.  Where this is allowed to continue unchecked for too long, the result may be an over-correction that can also have adverse impacts on performance, which is why this should be an ongoing area of focus rather than an episodic one.

Beyond performance management, I believe it’s important to think of all of the expense categories before this one as variable, which is sometimes not how they are evaluated and managed.  If non-direct-labor expense is substantial, a different question to consider is the relative value of “working capacity” (i.e., “knowledge workers”) compared with expense consumed on other things.  Said differently, a mental model I used with a team in the past was that “every million dollars we save in X (insert dimension or cost element here… licensing, sourcing, infrastructure, applications) is Y people we can retain to do meaningful work.”
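As a worked version of that mental model (the fully loaded cost per person is an assumption and varies widely by role and market):

```python
savings = 1_000_000               # annual savings found in licensing, infrastructure, etc.
loaded_cost_per_person = 175_000  # assumed fully loaded annual cost per person
print(f"~{savings / loaded_cost_per_person:.1f} people of working capacity retained")  # ~5.7
```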

Wrapping Up

Understanding that this has been a relatively long article, and still only a high-level treatment of a number of these topics, I hope it has been useful in calling out many of the opportunities that are available to promote excellence in operations and optimize value/cost over time.

In my experience, having been in multiple organizations that have realigned costs, I can say it takes engaged and courageous leadership to make thoughtful changes versus expedient ones… it matters… and it’s worth the time invested to find the right balance overall.  In a perfect world, disciplined operations should be a part of the makeup of an effectively led organization on an ongoing basis, not the result of a market correction or fluctuation in demand or business priorities.

 

Excellence always matters, quality and value always matter.  The discipline it takes to create and manage that environment is worth the time it takes to do it effectively.

 

Thank you for taking the time to read these thoughts.  As with everything I write, feedback and reactions are welcome.  I hope this was worth the investment of time.

-CJG 04/09/2023
