Thoughts on Portfolio Management

Overview

Setting the stage

Having had multiple recent discussions related to portfolio management, I thought I’d share some thoughts on disciplined operations as it applies to that subject and to the associated toolsets.  This is a substantial topic, but I’ll try to hit the main points and address more detailed questions as and when they arise.

In getting started, given all the buzz around GenAI, I asked ChatGPT “What are the most important dimensions of portfolio management in technology?”  What was interesting was that the response aligned with most discussions I’ve had over time, which is to say that it provided a process-oriented perspective on strategic alignment, financial management, and so on (a dozen dimensions overall), with a wonderfully summarized description of each (and it was both helpful and informative).  The curious part was that it missed the two things I believe are most important: courageous leadership and culture.

The remainder of this article will focus more on the process dimensions (I’m not going to frame it the same as ChatGPT for simplicity), but I wanted to start with a fundamental point: these things have to be about partnership and value first and process second.  If the focus becomes the process, there is generally something wrong in the partnership or the process is likely too cumbersome in how it is designed (or both).

 

Portfolio Management

Partnership

Portfolio management needs to start with a fundamental partnership and shared investment between business and technology leaders on the intended outcome.  Fortunately, or unfortunately, where the process tends to get the most focus (and part of why I’ve heard it so much in the last couple of years) is in a difficult market/economy where spend management takes center stage and the intention is largely to optimize costs.  Broadly speaking, when times are good and businesses grow, the processes for prioritization and governance can become less rigorous in a speed-to-market mindset, the demand for IT services increases, and a significant amount of inefficiency, along with delivery and quality issues, can arise as a result.  The reality is that discipline should always be a part of the process because it’s in the best interest of creating value (long- and short-term) for an organization.  That isn’t to suggest artificial constraints, unnecessary gates in a process, or anything to hinder speed-to-market.  Rather, the goal of portfolio management should be to have a framework in place to manage demand through delivery in a way that facilitates predictable, timely, and quality delivery and a healthy, secure, robust, and modern underlying technology footprint that creates significant business value and competitive advantage over time.  That overall objective is just as relevant during a demand surge as it is when spending is constrained.

This is where courageous leadership becomes the other critical overall dimension.  It’s never possible to do everything and do it well.  The key is to maintain the right mix of work, creating the right outcomes, at a sustainable pace, with quality.  Where technology leaders become order takers, a significant amount of risk can be introduced that actually hurts a business over time.  Taking on too much without thoughtful planning can result in critical resources being spread too thin, missed delivery commitments, poor quality, and substantial technical debt, all of which eventually undermine the originally intended goal of being “responsive”.  This is why partnership and mutual investment in the intended outcomes matter.  Not everything has to be “perfect” (and the concept itself doesn’t really exist in technology anyway), but the point is to make conscious choices on where to spend precious company resources to optimize the overall value created.

 

End-to-End Transparency

Shifting focus from the direction to the execution, portfolio management needs to start with visibility in three areas:

  • Demand management – the work being requested
  • Delivery monitoring – the work being executed
  • Value realization – the impact of what was delivered

In demand management, the focus should ideally be on both internal and external factors (e.g., business priorities, customer needs, competitive and industry trends), a thoughtful understanding of the short- and long-term value of the various opportunities, the requirements (internal and external) necessary to make them happen, and the desired timeframe for those results to be achieved.  From a process standpoint, notice of involvement and request for estimate (RFE) processes tend to be important (depending on the scale and structure of an organization), along with ongoing resource allocation and forecast information to evaluate these opportunities as they arise.

Delivery monitoring is important, given the dependencies that can and do exist within and across efforts in a portfolio, the associated resource needs, and the expectations they place on customers, partners, or internal stakeholders once delivered.  As and when things change, there should be awareness as to the impact of those changes on upcoming demand as well as other efforts within a managed portfolio.

Value realization is a generally underserved, but important, part of portfolio management, especially in spending-constrained situations.  This level of discipline (at an overall level) is important for two primary reasons: first, to understand the efficacy of estimation and planning processes in the interest of future prioritization and planning and, second, to ensure investments were made effectively in the right priorities.  Where there is no “retrospective”, a lot of learnings that would support continuous improvement and operational efficiency and effectiveness over time are lost (ultimately having an adverse impact on the business value created).

 

Maintaining a Balanced Portfolio

Two concepts that I believe are important to consider in how work is ultimately allocated/prioritized within an IT portfolio:

  • Portfolio allocation – the mix of work that is being executed on an ongoing basis
  • Prioritization – how work is ultimately selected and the process for doing so

A good mental model for portfolio allocation is a jigsaw puzzle.  Some pieces fit together, others don’t, and whatever pieces are selected, you are ultimately striving for an overall picture that matches what you originally saw “on the box”.  While you can work in multiple areas of a puzzle at the same time, you generally can’t focus on all of them concurrently and expect to be efficient on the whole.

What I believe a “good” portfolio should include is four key areas (with an optional fifth):

  • Innovation – testing and experimenting in areas where you may achieve significant competitive advantage or differentiation
  • Business Projects – developing solutions that create or enable new or enhanced business capabilities
  • Modernization – using an “urban renewal” mindset to continue to maintain, simplify, rationalize, and advance your infrastructure to avoid significant end of life, technical debt, or other adverse impacts from an aging or diverse technology footprint
  • Security – continuing to leverage tools and technologies that manage the ever increasing exposure associated with cyber security threats (internal and external)
  • Compliance (where appropriate) – investing in efforts to ensure appropriate conformance and controls in regulatory environments / industries

I would argue that, regardless of the level of overall funding, these categories should always be part of an IT portfolio.  There can obviously be projects or programs that provide forward momentum in more than one category above, but where there isn’t some level of investment in the “non-business project” areas, there will likely be a significant correction needed at some point in time that could be very disruptive from a business standpoint.  It is probably also worth noting that I am purposely not calling out a “technology projects” category above.  From my perspective, if a project doesn’t drive one of the other categories, I’d question what value it creates.  There is no value in technology for technology’s sake.

From a prioritization standpoint, I’ve seen both ends of the spectrum over the course of time: environments where there is no prioritization in place and everything with a positive business case (and even some without) is sent into execution, to ones where there is an elaborate “scoring” methodology, with weights and factors and metrics organized into highly elaborate calculations that create a false sense of “rigor” in the efficacy of the process.  My point of view overall is that, with the above portfolio allocation model in place, ensuring some balance in each of the critical categories of spend, a prioritization process should include some level of metrics, with an emphasis on short- and long-term business/financial impact as well as a conscious integration of the resource commitments required to execute the effort by comparison with other alternatives.  As important as any process, however, are the discussions that should be happening from a business standpoint to ensure engagement, partnership, and the overall business value being delivered through the portfolio (the picture on the box) in the decisions made.
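To make the “some level of metrics” point a bit more concrete, below is a minimal sketch (in Python, with purely hypothetical project names, weights, and capacity figures) of a lightweight approach: a couple of impact factors, a resource-demand denominator, and a capacity check, rather than an elaborate scoring engine.

    # Hypothetical, lightweight prioritization sketch: a few factors, simple
    # weights, and a capacity check -- not an elaborate scoring methodology.

    CANDIDATES = [
        # name, short-/long-term impact (1-5), resource demand (person-months)
        {"name": "Customer portal refresh", "short_term": 4, "long_term": 3, "demand": 18},
        {"name": "ERP patch uplift",        "short_term": 2, "long_term": 4, "demand": 6},
        {"name": "Pricing analytics pilot", "short_term": 3, "long_term": 5, "demand": 9},
    ]

    WEIGHTS = {"short_term": 0.5, "long_term": 0.5}   # illustrative only
    CAPACITY = 24                                     # person-months available this cycle

    def score(item):
        # Weighted impact per unit of resource demand, so a small high-impact
        # effort can outrank a large one.
        value = sum(item[factor] * weight for factor, weight in WEIGHTS.items())
        return value / item["demand"]

    committed = 0
    for item in sorted(CANDIDATES, key=score, reverse=True):
        selected = committed + item["demand"] <= CAPACITY
        committed += item["demand"] if selected else 0
        print(f"{item['name']:<26} score={score(item):.2f}  {'selected' if selected else 'deferred'}")

The numbers matter far less than the discussion they support; the weights and capacity would come out of the business conversation, not the other way around.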

 

Release Management

Part of arriving at the right set of work to do also comes down to release management.  A good analogy for release management is the game Tetris.  In Tetris, you have various shaped blocks dropping continually into a grid, with the goal of rotating and aligning them to fit as cleanly as possible with what is already on the board.  There are and always will be gaps and the fit will never be perfect, but you can certainly approach Tetris in a way that is efficient and well-aligned or in a way that is very wasteful of the overall real estate with which you have to work.

This is a great mental model for how project planning should occur.  If you do a good job, resources are effectively utilized, outcomes are predictable, there is little waste, and things run fairly smoothly.  If you don’t think about the process and continually inject new work into a portfolio without thoughtful planning as to dependencies and ongoing commitments, there can and likely will be significant waste, inefficiency, collateral impact, and issues in execution.

Release management comes down to two fundamental components:

  • Release strategy – the approach to how you organize and deliver major and minor changes to various stakeholder groups over time
  • Release calendar – an ongoing view of what will be delivered at various times, along with any critical “T-minus” dates and/or delivery milestones that can be part of a progress monitoring or gating process used in conjunction with delivery governance processes

From a release strategy standpoint, it is tempting in a world of product teams, DevSecOps, and CI/CD pipelines to assume everything comes down to individual product plans and their associated release schedules.  The two primary issues here are the time and effort it generally takes to deploy new technology and the associated change management impact on the end users who are expected to adopt those changes as and when they occur.  The more fragmented the planning process, the more business risk there is that end users or customers will be either under- or overserved at any given point in time.  A thoughtful release strategy can help create predictable, manageable, and sustainable levels of change over time across the diverse set of stakeholders being served.

The release calendar, aside from being an overall summary of what will be delivered when and to whom, also should ideally provide transparency into other critical milestones in the major delivery efforts so that, in the event something moves off plan (which is a very normal occurrence in technology and medium to larger portfolios), the relationship to other ongoing efforts can be evaluated from a governance standpoint to determine whether any rebalancing or slotting of work is required.
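As a purely illustrative sketch (release names, dates, milestone labels, and dependencies below are hypothetical), the calendar can be as simple as a structure that a governance review queries when something moves off plan:

    from datetime import date

    # Hypothetical release calendar entries with "T-minus" milestones and the
    # downstream efforts that depend on each release.
    CALENDAR = [
        {"release": "Billing 2.4",
         "go_live": date(2024, 9, 15),
         "milestones": {"code freeze": date(2024, 8, 18), "UAT complete": date(2024, 9, 5)},
         "dependents": ["Customer portal refresh"]},
        {"release": "Customer portal refresh",
         "go_live": date(2024, 10, 20),
         "milestones": {"integration test": date(2024, 10, 1)},
         "dependents": []},
    ]

    def impacted_by_slip(release_name, new_go_live):
        """List downstream efforts to revisit when a release moves off plan."""
        entry = next(r for r in CALENDAR if r["release"] == release_name)
        slip_days = (new_go_live - entry["go_live"]).days
        return [(dependent, slip_days) for dependent in entry["dependents"]] if slip_days > 0 else []

    print(impacted_by_slip("Billing 2.4", date(2024, 10, 1)))   # [('Customer portal refresh', 16)]

The tooling can obviously be richer, but the governance question it needs to answer is exactly that simple: what else is affected, and by how much.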

 

Change Management

While I won’t spend a significant amount of time on this point, change management is an area where I’ve seen the process managed both very well and relatively poorly.  The easy part is generally managing change relative to a specific project or program, and that governance often exists in my experience.  The issue that can arise is when the leadership overseeing a specific project is only taking into account the implications of change on that effort alone, and not the potential ripple effect of a schedule, scope, or financial adjustment on the rest of the portfolio, on future demand, or on end users in the event that releases are being adjusted.

 

On Tooling

Pivoting from processes to tools, at an overall level, I’m generally not a fan of over-engineering the infrastructure associated with portfolio management.  It is very easy for such an infrastructure to take on a life of its own, become a significant administrative burden that creates little value (beyond transparency), or contain outdated and inaccurate information because the process captures too much data without clear ownership or actual usage of the data obtained.

The goal is the outcome, not the tools.

To the extent that a process is being established, I’d generally want to focus on transparency (demand through delivery) and a healthy ongoing discussion of priorities in the interest of making informed decisions.  Beyond that, I’ve seen a lot of reporting that doesn’t generally result in any action being taken, which I consider to be very ineffective from a leadership and operational standpoint.

Again, if the process is meant to highlight a relationship problem, such as a dashboard that requires a large number of employees to capture timesheets, marked to various projects and rolled up, all to have a management discussion to say “we’re over-allocated and burning out our teams”, my question would be why all of that data and effort was required to “prove” something, whether there is actual trust and partnership, whether there are other underlying delivery performance issues, and so on.  The process and tools are there to enable effective execution and the creation of business value, not to drain effort and energy into administrivia that could better be applied to delivery.

 

Wrapping Up

Overall, having spent a number of years seeing well developed and executed processes as well as less robust versions of the same, effective portfolio management comes down to value creation.  When the focus becomes about the process, the dashboard, the report, the metrics, something is amiss in my experience.  It should be about informing engaged leadership, fostering partnership, enabling decisions, and creating value.  That is not to say that average utilization of critical resources (as an example) isn’t a good thing to monitor and keep in mind, but it’s what you do with that information that matters.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 07/29/2024

The Future of IT

Overview

Background

I’ve been thinking about writing this article for a while, with the premise of “what does IT look like in the future?”  In a digital economy, the role of technology in The Intelligent Enterprise will certainly continue to be creating value and competitive business advantage.  That being said, one can reasonably assume a few things that are true today for medium to large organizations will continue to be part of that reality as well, namely:

  • The technology footprint will be complex and heterogeneous in its makeup. To the degree that there is a history of acquisitions, even more so
  • Cost will always be a concern, especially to the degree it exceeds value delivered (this is explored in my article on Optimizing the Value of IT)
  • Agility will be important in adopting and integrating new capabilities rapidly, especially given the rate of technology advancement only appears to be accelerating over time
  • Talent management will be complex given the variety of technologies present will be highly diverse (something I’ve started to address in my Workforce and Sourcing Strategy Overview article)

My hope is to provide some perspective in this article on where I believe things will ultimately move in technology, in the underlying makeup of the footprint itself, how we apply capabilities against it, and how to think about moving from our current reality to that environment.  Certainly, all five of the dimensions I outlined in my article on Creating Value Through Strategy will continue to apply at an overall strategy level (four of which are referenced in the bullet points above).

A Note on My Selfish Bias…

Before diving further into the topic at hand, I want to acknowledge that I am coming from a place where I love software development and the process surrounding it.  I taught myself to program in the third grade (in Apple Basic), got my degree in Computer Science, started as a software engineer, and taught myself Java and .Net for fun years after I stopped writing code as part of my “day job”.  I love the creative process for conceptualizing a problem, taking a blank sheet of paper (or white board), designing a solution, pulling up a keyboard, putting on some loud music, shutting out distractions, and ultimately having technology that solves that problem.  It is a very fun and rewarding thing to explore those boundaries of what’s possible and balance the creative aspects of conceptual design with the practical realities and physical constraints of technology development.

All that being said, insofar as this article is concerned, when we conceptualize the future of IT, I wanted to put a foundational position statement forward to frame where I’m going from here, which is:

Just because something is cool and I can do it, doesn’t mean I should.

That is a very difficult thing to internalize for those of us who live and breathe technology professionally.  Pride of authorship is a real thing and, if we’re to embrace the possibilities of a more capable future, we need to apply our energies in the right way to maximize the value we want to create in what we do.

The Producer/Consumer Model

Where the Challenge Exists Today

The fundamental problem I see in technology as a whole today (I realize I’m generalizing here) is that we tend to want to be good at everything, build too much, customize more than we should, and throw caution to the wind when it comes to standards and governance, treating them as inconveniences that slow us down in the “deliver now” environment in which we generally operate (see my article Fast and Cheap, Isn’t Good for more on this point).

Where that leaves us is bloated, heavy, expensive, and slow… and it’s not good.  For all of our good intentions, IT doesn’t always have the best reputation for understanding, articulating, or delivering value in business terms and, in quite a lot of situations I’ve seen over the years, our delivery story can be marred with issues that don’t create a lot of confidence when the next big idea comes along and we want to capitalize on the opportunity it presents.

I’m being relatively negative on purpose here, but the point is to start with the humility of acknowledging the situation that exists in a lot of medium to large IT environments, because charting a path to the future requires a willingness to accept that reality and to create sustainable change in its place.  The good news, from my experience, is there is one thing going for most IT organizations I’ve seen that can be a critical element in pivoting to where we need to be: a strong sense of ownership.  That ownership may show up as frustration in the status quo depending on the organization itself, but I’ve rarely seen an IT environment where the practitioners themselves don’t feel ownership for the solutions they build, maintain, and operate or have a latent desire to make them better.  There may be a lack of a strategy or commitment to change in many organizations, but the underlying potential to improve is there, and that’s a very good thing if capitalized upon.

Challenging the Status Quo

Pivoting to the future state has to start with a few critical questions:

  • Where does IT create value for the organization?
  • Which of those capabilities are available through commercially available solutions?
  • To what degree are “differentiated” capabilities or features truly creating value? Are they exceptions or the norm?

Using an example from the past, a delivery team was charged with solving a set of business problems that they routinely addressed through custom solutions, even though the same capabilities could be accomplished through integration of one or more commercially available technologies.  From an internal standpoint, the team promoted the idea that they had a rapid delivery process, were highly responsive to the business needs they were meant to address, etc.  The problem is that the custom approach actually cost more money to develop, maintain, and support, and was considerably more difficult to scale.  Because solutions were also continually developed without standards, the team’s ability to adopt or integrate any new technologies available on the market was non-existent.  Those situations inevitably led to new custom solutions, and the costs of ownership skyrocketed over time.

This situation begs the question: if it’s possible to deliver equivalent business capability without building anything “in house”, why not do just that?

In the proverbial “buy versus build” argument, these are the reasons I believe it is valid to ultimately build a solution:

  • There is nothing commercially available that provides the capability at a reasonable cost
    • I’m referencing cost here, but it’s critical to understand the TCO implications of building and maintaining a solution over time. They are very often underestimated.
  • There is a commercially available solution that can provide the capability, but something about privacy, IP, confidentiality, security, or compliance-related concerns makes that solution infeasible in a way that contractual terms can’t address
    • I mention contracting purposefully here, because I’ve seen viable solutions eliminated from consideration over a lack of willingness to contract effectively, and that seems suboptimal by comparison with the cost of building alternative solutions instead

Ultimately, we create value through business capabilities enabled by technology; “who” built them doesn’t matter.

Rethinking the Model

My assertion is that we will obtain the most value and acceleration of business capabilities when we shift towards a producer/consumer model in technology as a whole.

What that suggests is that “corporate IT” largely adopts the mindset of the consumer of technologies (specifically services or components) developed by producers focused purely on building configurable, leverageable components that can be integrated in compelling ways into a connected ecosystem (or enterprise) of the future.

What corporate IT “produces” should be limited to differentiated capabilities that are not commercially available, and a limited set of foundational capabilities that will be outlined below.  By producing less and thinking more as a consumer, corporate IT should shift its focus internally towards how technology can more effectively enable business capability and innovation, and externally towards understanding, evaluating, and selecting from the best-of-breed capabilities in the market that help deliver on those business needs.

The implication, of course, for those focused on custom development would be to move towards those differentiated capabilities or entirely towards the producer side (in a product-focused environment), which honestly could be more satisfying than corporate IT can be for those with a strong development inclination.

The cumulative effect of these adjustments should lead to an influx of talent into the product community, an associated expansion of available advanced capabilities in the market, and an accelerated ability to eventually adopt and integrate those components in the corporate environment (assuming the right infrastructure is then in place), creating more business value than is currently possible where everyone tries to do too much and sub-optimizes their collective potential.

Learning from the Evolution of Infrastructure

The Infrastructure Journey

You don’t need to look very far back in time to remember when the role of a CTO was largely focused on managing data centers and infrastructure in an internally hosted environment.  Along the way, third parties emerged to provide hosting services and alleviate the need to be concerned with routine maintenance, patching, and upgrades.  Then converged infrastructure and the software-defined data center provided opportunities to consolidate and optimize that footprint and manage cost more effectively.  With the rapid evolution of public and private cloud offerings, the arguments for managing much of your own infrastructure, beyond those related specifically to compliance or legal concerns, are very limited, and the trajectory of edge computing environments is still evolving fairly rapidly as specialized computing resources and appliances are developed.  The lesson being: it’s not what you manage in-house that matters, it’s the services you provide relative to security, availability, scalability, and performance.

Ok, so what happens when we apply this conceptual model to data and applications?  What if we were to become a consumer of services in these domains as well?  The good news is that this journey is already underway; the question is how far we should take things in the interest of optimizing the value of IT within an organization.

The Path for Data and Analytics

In the case of data, I think about this area in two primary dimensions:

  • How we store, manage, and expose data
  • How we apply capabilities to that data and consume it

In terms of storage, the shift from hosted data to cloud-based solutions is already underway in many organizations.  The key levers continue to be ensuring data quality and governance, finding ways to minimize data movement and optimize data sharing (while facilitating near real-time analytics), and establishing means to expose data in standard ways (e.g., virtualization) that enable downstream analytic capabilities and consumption methods to scale and work consistently across an enterprise.  Certainly, the cost of ingress and egress of data across environments is a key consideration, especially where SaaS/PaaS solutions are concerned.  Another opportunity continues to be the money wasted on building data lakes (beyond archival and unstructured data needs) when viable platform solutions in that space are available.  From my perspective, the less time and resources spent on moving and storing data to no business benefit, the more energy that can be applied to exposing, analyzing, and consuming that data in ways that create actual value.  Simply said, we don’t create value in how or where we store data; we create value in how we consume it.

On the consumption side, having a standards-based environment with a consistent method for exposing data and enabling integration will lend itself well to tapping into the ever-expanding range of analytical tools on the market, as well as swapping out one technology for another as those tools continue to evolve and advance in their capabilities over time.  The other major pivot is to shift from “traditional” analytical reporting and business intelligence solutions to more dynamic data apps that leverage AI to inform meaningful end-user actions, whether that’s for internal or external users of systems.  Compliance-related needs aside, at an overall level, the primary goal of analytics should be informed action, not administrivia.

The Shift In Applications

The challenge in the applications environment is arbitrating the balance between monolithic (“all in”) solutions, like ERPs, and a fully distributed component-based environment that requires potentially significant management and coordination from an IT standpoint. 

Conceptually, for smaller organizations, where the core applications (like an ERP suite + CRM solution) represent the majority of the overall footprint and there aren’t a significant number of specialized applications that must interoperate with them, it likely would be appropriate and effective to standardize based on those solutions, their data model, and integration technologies.

On the other hand, the more diverse and complex the underlying footprint is for a medium- to large-size organization, the more value there is in looking at ways to decompose these relatively monolithic environments to provide interoperability across solutions, enable rapid integration of new capabilities into a best-of-breed ecosystem, and facilitate analytics that span multiple platforms in ways that would be difficult, costly, or impossible to do within any one or two given solutions.  What that translates to, in my mind, is an eventual decline of the monolithic ERP-centric environment in favor of a service-driven ecosystem where individually configured capabilities are orchestrated through data and integration standards with components provided by various producers in the market.  That doesn’t necessarily align to the product strategies of individual companies trying to grow through complementary vertical or horizontal solutions, but I would argue those products should create value at an individual component level and be configurable such that swapping out one component of a larger ecosystem remains feasible without having to abandon the other products in that application suite (which may individually be best-of-breed) as well.

Whether shifting from a highly insourced to a highly outsourced/consumption-based model for data and applications will be feasible remains to be seen, but there was certainly a time not that long ago when hosting a substantial portion of an organization’s infrastructure footprint in the public cloud was a cultural challenge.  Moving up the technology stack from the infrastructure layer to data and applications seems like a logical extension of that mindset, placing emphasis on capabilities provided and value delivered versus assets created over time.

Defining Critical Capabilities

Own Only What is Essential

Making an argument to shift to a consumption-oriented mindset in technology doesn’t mean there isn’t value in “owning” anything; rather, it’s meant to be a call to evaluate and challenge assumptions related to where IT creates differentiated value and to apply our energies towards those things.  What can be leveraged, configured, and orchestrated, I would buy and use.  What should be built?  Capabilities that are truly unique, create competitive advantage, can’t be sourced in the market overall, and that create a unified experience for end users.  On the final point, I believe that shifting to a disaggregated applications environment could create complexity for end users in navigating end-to-end processes in intuitive ways, especially to the degree that data apps and integrated intelligence become a common way of working.  To that end, building end-user experiences that can leverage underlying capabilities provided by third parties feels like a thoughtful balance between a largely outsourced application environment and a highly effective and productive individual consumer of technology.

Recognize Orchestration is King

Workflow and business process management is not a new concept in the integration space, but it’s been elusive (in my experience) for many years for a number of reasons.  What is clear at this point is that, with the rapid expansion in technology capabilities continuing to hit the market, our ability to synthesize a connected ecosystem that blends these unique technologies with existing core systems is critical.  The more we can do this in consistent ways, the more we shift towards a configurable and dynamic environment that is framework-driven, the more business flexibility and agility we will provide… and that translates to innovation and competitive advantage over time.  Orchestration is a critical piece of deciding which processes are critical enough that they shouldn’t be relegated to the internal workings of a platform solution or ERP, but taken in-house, mapped out, and coordinated with the intention of creating differentiated value that can be measured, evaluated, and optimized over time.  Clearly the scalability and performance of this component is critical, especially to the degree there is a significant amount of activity being managed through this infrastructure, but I believe the transparency, agility, and control afforded in this kind of environment would greatly outweigh the complexity involved in its implementation.
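To make the orchestration idea a little more tangible, here is a minimal, hypothetical sketch of a configuration-driven flow: the process definition is the part worth owning, while each step delegates to whatever underlying service (internal or third-party) actually does the work.  The step names and handlers are invented for illustration.

    import time

    # Hypothetical process definition: the sequence lives in configuration,
    # not buried inside any single platform or ERP.
    ORDER_TO_CASH = ["validate_order", "reserve_inventory", "invoice_customer"]

    # Handlers stand in for calls to underlying services (owned or third-party).
    def validate_order(ctx):    ctx["valid"] = True; return ctx
    def reserve_inventory(ctx): ctx["reserved"] = True; return ctx
    def invoice_customer(ctx):  ctx["invoice_id"] = "INV-001"; return ctx

    HANDLERS = {f.__name__: f for f in (validate_order, reserve_inventory, invoice_customer)}

    def run(process, ctx):
        """Execute a declared process, capturing timing per step so the flow
        itself can be measured, evaluated, and optimized over time."""
        for step in process:
            start = time.perf_counter()
            ctx = HANDLERS[step](ctx)
            ctx.setdefault("timings", {})[step] = time.perf_counter() - start
        return ctx

    print(run(ORDER_TO_CASH, {"order_id": 42}))

In practice this role would be played by a workflow or BPM platform rather than hand-written code; the sketch is only meant to show where the process definition and the performance data live.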

Put Integration in the Center

In a service-driven environment, clearly the infrastructure for integration, streaming in particular, along with enabling a publish-and-subscribe model for event-driven processing, will be critical for high-priority enterprise transactions.  The challenge in integration conversations, in my experience, tends to be defining the transactions that “matter”, in terms of facilitating interoperability and reuse, and those that are suitable for point-to-point, one-off connections.  There is ultimately a cost for reuse when you try to scale, and there is discipline needed to arbitrate those decisions to ensure they are appropriate to business needs.
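For the publish-and-subscribe point, a minimal in-memory sketch (hypothetical topic and event names) shows the decoupling that event-driven processing buys; a real implementation would sit on a streaming platform rather than a Python dictionary.

    from collections import defaultdict

    # A toy in-memory event bus: producers publish to topics, consumers subscribe.
    # The value is the decoupling -- neither side knows about the other.
    subscribers = defaultdict(list)

    def subscribe(topic, handler):
        subscribers[topic].append(handler)

    def publish(topic, event):
        for handler in subscribers[topic]:
            handler(event)

    # Two independent consumers of the same (hypothetical) enterprise event.
    subscribe("order.created", lambda e: print("billing saw order", e["order_id"]))
    subscribe("order.created", lambda e: print("fulfillment saw order", e["order_id"]))

    # One producer; it doesn't know or care who is listening.
    publish("order.created", {"order_id": 42, "amount": 125.00})

The discipline described above shows up in deciding which events are enterprise transactions worth governing this way and which interactions can remain point-to-point.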

Reassess Your Applications/Services

With any medium to large organization, there is likely technology sprawl to be addressed, particularly if there is a material level of custom development (because component boundaries likely won’t be well architected) and acquired technology (because of the duplication it can cause in solutions and instances of solutions) in the landscape.  Another complicating factor could be the diversity of technologies and architectures in place, depending on whether or not a disciplined modernization effort exists, the level of architecture governance in place, and rate and means by which new technologies are introduced into the environment.  All of these factors call for a thoughtful portfolio strategy, to identify critical business capabilities and ensure the technology solutions meant to enable them are modern, configurable, rationalized, and integrated effectively from an enterprise perspective.

Leverage Data and Insights, Then Optimize

With analytics and insights being a critical capability to differentiated business performance, an effective data governance program with business stewardship, selecting the right core, standard data sets to enable purposeful, actionable analytics, and process performance data associated with orchestrated workflows are critical components of any future IT infrastructure.  This is not all data, it’s the subset that creates significant business value to justify the investment in making it actionable. As process performance data is gathered through the orchestration approach, analytics can be performed to look for opportunities to evolve processes, configurations, rules, and other characteristics of the environment based on key business metrics to improve performance over time.     

Monitor and Manage

With the expansion of technologies and components, internal and external to the enterprise environment, having the ability to monitor and detect issues, proactively take action, and mitigate performance, security, or availability issues will become increasingly important.  Today’s tools are too fragmented and siloed to achieve the level of holistic understanding that is needed between hosted and cloud-based environments, including internal and external security threats in the process.

Secure “Everything”

While the risk that zero trust and vulnerability management address is expanding at a rate that often exceeds an organization’s ability to mitigate it, treating security as a fundamental requirement of current and future IT environments is a given.  The development of a purposeful cyber strategy, prioritizing areas for tooling and governance effectively, and continuing to evolve and adapt that infrastructure will be core to the DNA of operating successfully in any organization.  Security is not a nice-to-have, it’s a requirement.

The Role of Standards and Governance

What makes the framework-driven environment of the future work is ultimately having meaningful standards and governance, particularly for data and integration, but extending into application and data architecture, along with how those environments are constructed and layered to facilitate evolution and change over time.  Excellence takes discipline and, while that may require some additional investment in cost and time during the initial and ongoing stages of delivery, it will easily pay itself off in business agility, operating cost/ cost of ownership, and risk/exposure to cyber incidents over time.

The Lending Example

Having spent time a number of years ago understanding and developing strategy in the consumer lending domain, the similarities in process between direct and indirect lending, prime and specialty/sub-prime, and simple products like credit cards versus more complex ones like mortgages are difficult to ignore.  That being said, it isn’t unusual for systems to exist in a fairly siloed manner, from application to booking, to document preparation, and into the servicing process itself.

What’s interesting, from my perspective, is where the differentiation actually exists across these product sets: in the rules and workflow being applied across them, while the underlying functions themselves are relatively the same.  As an example, one thing that differentiates a lender is their risk management policy, not necessarily the tool they use to implement their underwriting rules or scoring models per se.  Similarly, whether pulling a credit score is part of the front end of the process in something like credit card or an intermediate step in education lending, having a configurable workflow engine could enable origination across a diverse product set with essentially the same back-end capabilities and likely at a lower operating cost.
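As a hedged illustration of the configurable-workflow point (product names, step sequences, and score thresholds below are invented), the differentiation lives in the per-product configuration while the underlying functions are shared:

    # Shared back-end capabilities -- the same functions across products.
    def collect_application(app): app["complete"] = True; return app
    def verify_enrollment(app):   app["enrolled"] = True; return app        # education-specific data point
    def pull_credit(app):         app["score"] = 712; return app            # stubbed bureau call
    def underwrite(app):          app["approved"] = app["score"] >= app["min_score"]; return app
    def book_loan(app):           app["booked"] = app["approved"]; return app

    STEPS = {f.__name__: f for f in
             (collect_application, verify_enrollment, pull_credit, underwrite, book_loan)}

    # Per-product configuration: the sequence and the risk policy differ;
    # the underlying capabilities do not.
    PRODUCTS = {
        "credit_card": {"flow": ["collect_application", "pull_credit", "underwrite", "book_loan"],
                        "min_score": 680},
        "education":   {"flow": ["collect_application", "verify_enrollment", "pull_credit",
                                 "underwrite", "book_loan"],
                        "min_score": 640},
    }

    def originate(product, application):
        config = PRODUCTS[product]
        application["min_score"] = config["min_score"]
        for step in config["flow"]:
            application = STEPS[step](application)
        return application

    print(originate("education", {"applicant": "A-100"}))

Swapping in a new product, or changing a lender’s risk policy, becomes a configuration change rather than another custom build.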

So why does it matter?  Well, to the degree that the focus shifts from developing core components that implement relatively commoditized capability to the rules and processes that enable various products to be delivered to end consumers, the speed with which products can be developed, enhanced, modified, and deployed should be significantly improved.

Ok, Sounds Great, But Now What?

It Starts with Culture

At the end of the day, even the best designed solutions come down to culture.  As I mentioned above, excellence takes discipline and, at times, patience and thoughtfulness that seems to contradict the speed with which we want to operate from a technology (and business) standpoint.  That being said, given the challenges that ultimately arise when you operate without the right standards, discipline, and governance, the outcome is well worth the associated investments.  This is why I placed courageous leadership as the first pillar in the five dimensions outlined in my article on Excellence by Design.  Leadership is critical and, without it, everything else becomes much more difficult to accomplish.

Exploring the Right Operating Model

Once a strategy is established to define the desired future state and a culture to promote change and evolution is in place, looking at how to organize around managing that change is worth consideration.  I don’t necessarily believe in “all in” operating approaches, whether it is a plan/build/run, product-based orientation, or some other relatively established model.  I do believe that, given leadership and adaptability are critically needed for transformational change, looking at how the organization is aligned to maintaining and operating the legacy environment versus enabling establishment and transition to the future environment is something to explore.  As an example, rather than assuming a pure product-based orientation, which could mushroom into a bloated organization design where not all leaders are well suited to manage change effectively, I’d consider organizing around a defined set of “transformation teams” that operate in a product-oriented/iterative model, but basically take on the scope of pieces of the technology environment, re-orient, optimize, modernize, and align them to the future operating model, then transition those working assets to different leaders that maintain or manage those solutions in the interest of moving to the next set of transformation targets.  This should be done in concert with looking for ways to establish “common components” teams (where infrastructure like cloud platform enablement can be a component as well) that are driven to produce core, reusable services or assets that can be consumed in the interest of ultimately accelerating delivery and enabling wider adoption of the future operating model for IT.

Managing Transition

One of the consistent challenges with any kind of transformative change is moving from what is likely a very diverse, heterogeneous environment to one that is standards-based, governed, and relatively optimized.  While it’s tempting to take on too much scope and ultimately undermine the aspirations of change, I believe there is a balance to be struck in defining and establishing some core delivery capabilities that are part of the future infrastructure, but incrementally migrating individual capabilities into that future environment over time.  This is another case where disciplined operations and disciplined delivery come into play so that changes are delivered consistently, but also in a way that is sustainable and consistent with the desired future state.

Wrapping Up

While a certain level of evolution is guaranteed as part of working in technology, the primary question is whether we will define and shape that future or be continually reacting and responding to it.  My belief is that we can, through a level of thoughtful planning and strategy, influence and shape the future environment to be one that enables rapid evolution as well as accelerated integration of best-of-breed capabilities at a pace and scale that is difficult to deliver today.  We may never truly move to a full producer/consumer environment that is service-based, standardized, governed, orchestrated, fully secured, and optimized, but falling short of excellence as an aspiration would still leave us in a considerably better place than where we are today… and it’s a journey worth making in my opinion.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 03/08/2024

Transforming Manufacturing

Overview

Growing Up in Manufacturing

My father ran his own business when I was growing up.  His business had two components: first, he manufactured pinion wire (steel or brass rods of various diameters with teeth from which gears are cut) and, second, he produced specialty gears that were used in various applications (e.g., the timing mechanism of an oil pump).  It was something he used to raise and support a large Italian family and it was pretty much a one-man show, with help from his kids as needed, whether that was counting and quality testing gears with mating parts or cutting, packing, and shipping material to various customers across North America.  He acquired and learned how to operate screw machines to produce pinion wire but eventually shifted to a distribution business, where he would buy finished material in quantity and then distribute it to middle market customers at lower volumes at a markup.

His business was as low tech as you could get, with a little card file he maintained that had every order by customer written out on an index card, tracking the specific item/part, volume, and pricing so he had a way to understand history as new requests for quotes and orders came in and also understand purchase patterns over time.  It was largely a relationship business, and he took his customer commitments to heart to the point that dinner conversation could easily drift into a worry about a disruption in his supply chain (e.g., something not making it to the electroplater on time) and whether he might miss his promised delivery date.  Integrity and accountability were things that mattered and it was very clear his customers knew it.  He had a note pad on which he’d jot things down to keep some subset of information on customer/prospect follow-ups, active orders, pending quotes, and so on, but to say there was a system outside what he kept in his head would be unfair to his mental capacity, which was substantial.

It was a highly manual business and, as a person who taught myself how to write software in the third grade, I was always curious what he could do to make things a little easier, more structured, and less manual, even though that was an inescapable part of running his business on the whole.  That isn’t to say he shared that concern, given he’d developed a system and way of operating over many years, knew exactly how it worked, and was entirely comfortable with it.  There was also the small point of my father being relatively stubborn, but that didn’t necessarily deter me from suggesting things could be improved.  I do like a challenge, after all…

As it happened, when I was in high school, we got our first home computer (somewhere in the mid-1980s) and, despite its relatively limited capacity, I thought it would be a good idea to figure out a way to make something about running his business a little easier with technology.  To that end, I wrote a piece of software that would take all of his order management off of the paper index cards and put it into an application.  The ability to look up customer history, enter new orders, look at pricing differences across customers, etc. were all things I figured would make life a lot easier, not to mention reducing the need to maintain this overstuffed card file that seemed highly inefficient to me.

By this point, I suspect it’s clear to the reader what happened… which is that, while my father appreciated the good intentions and concept, there was no interest in changing the way he’d been doing business for decades in favor of using technology he found a lot more confusing and intimidating than what he already knew (and that worked to his level of satisfaction).  I ended up relatively disappointed, but learned a valuable lesson, which is that the first challenge in transformation is changing mindsets… without that, the best vision in the world will fail, no matter what value it may create.

It Starts with Mindset

I wanted to start this article with the above story because, despite nearly forty years having passed since I tried to introduce a little bit of automation to my father’s business, manufacturing in today’s environment can be just as antiquated and resistant to change as it was then, seeing technology as an afterthought, a bolt on, or cost of doing business rather than the means to unlock the potential that exists to transform a digital business in even the most “low tech” of operating environments.

While there is an inevitable and essential dependence on people, equipment, and processes, my belief is that we have a long way to go on understanding the critical role technology plays in unlocking the potential of all of those things to optimize capacity, improve quality, ensure safety, and increase performance in a production setting.

The Criticality of Discipline

Having spent a number of years understanding various approaches to digital manufacturing, one point that I wanted to raise prior to going into more of the particulars is the importance of operating with a holistic vision and striking the balance between agility and long-term value creation.  As I addressed in my article Fast and Cheap Isn’t Good, too much speed without quality can lead to complexity, uncontrolled and inflated TCO, and an inability to integrate and scale digital capabilities over time.  Wanting something “right now” isn’t an excuse not to do things the right way and eventually there is a price to pay for tactical thinking when solutions don’t scale or produce more than incremental gains. 

This is also related to “Framework-Driven Design” that I talk about in my article on Excellence by Design.  It is rarely the case that there is an opportunity to start from scratch in modernizing a manufacturing facility, but I do believe there is substantial value in making sure that investments are guided by an overall operating concept, technology strategy, and evolving standards that will, over time, transform the manufacturing environment as a whole and unlock a level of value that isn’t possible where incremental gains are always the goal.  Sustainable change takes time.

The remainder of this article will focus on a set of areas that I believe form the core of the future digital manufacturing environment.  Given this is a substantial topic, I will focus on the breadth of the subject versus going too deep into any one area.  Those can be follow-up articles as appropriate over time.

 

Leveraging Data Effectively

The Criticality of Standards

It is a foregone conclusion that you can’t optimize what you can’t track, measure, and analyze in real-time.  To that end, starting with data and standards is critical in transforming to a digital manufacturing environment.  Without standards, the ability to benchmark, correlate, and analyze performance will be severely compromised.  This can be as basic as how a camera system, autonomous vehicle, drone, conveyor, or digital sensor is integrated within a facility, to the representation of equipment hierarchies, or how operator roles and processes are tracked across a set of similar facilities.  Where standards for these things don’t exist, value will be constrained to a set of individual point solutions, use cases, and one-off successes, because the technical debt associated with retrofitting and mapping across the various standards in place will create a significant maintenance effort that limits focus on true innovation and optimization.  Where standards are implemented and scaled over time, however, the value opportunity will eventually cross over into exponential gains that aren’t otherwise possible.  This isn’t to suggest that there is a one-size-fits-all way of thinking about standards or that every solution needs to conform for the sake of an ivory tower ideal.  The point is that it’s worth slowing down the pace of “progress” at times to understand the value in designing solutions for longer-term value creation.
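To ground the standards point with a hypothetical example (the site names, hierarchy levels, tag names, and units below are invented), a minimal convention for how any sensor or device reports into the enterprise might look like the sketch below; the specific schema matters far less than the fact that every facility uses the same one.

    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class Reading:
        """One standardized sensor/device reading, regardless of vendor or facility."""
        site: str            # facility identifier
        area: str            # equipment hierarchy: area -> line -> asset
        line: str
        asset: str
        tag: str             # measured variable, e.g. "bearing_temp"
        value: float
        unit: str            # consistent units at the enterprise boundary
        ts: str              # UTC ISO-8601 timestamp

    def reading(site, area, line, asset, tag, value, unit):
        return asdict(Reading(site, area, line, asset, tag, value, unit,
                              datetime.now(timezone.utc).isoformat()))

    # Two very different devices, one shared representation -- which is what makes
    # benchmarking and cross-facility analytics possible later.
    print(reading("plant-07", "packaging", "line-3", "conveyor-12", "motor_current", 14.2, "A"))
    print(reading("plant-02", "extrusion", "line-1", "press-04", "bearing_temp", 61.5, "degC"))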

The Role of Data Governance

It’s impossible to discuss the criticality of standards without also highlighting the need for active, ongoing data governance: to ensure standards are followed, to ensure data quality at the local and enterprise level is given priority (especially to the degree that analytical insights and AI become core to informed decision making), and to help identify and surface additional areas of opportunity where standards may be needed to create further insights and value across the operating environment.  The upshot of this is that there need to be established roles and accountability for data stewards at the facility and enterprise level if there is an aspiration to drive excellence in manufacturing, no matter what the present level of automation is across facilities.

 

Modeling Distributed Operations

Applying Distributed Computing

There is a power in distributed computing that enables you to scale execution at a rate that is beyond the capacity you can achieve with a single machine (or processor).  The model requires an overall coordinator of activity to distribute work and monitor execution and then the individual processors to churn out calculations as rapidly as they are able.  As you increase processors, you increase capacity, so long as the orchestrator can continue to manage and coordinate the parallel activity effectively.

From a manufacturing standpoint, the concept applies well across a set of distributed facilities, where the overall goal is to optimize the performance and utilization of available capacity given varying demand signals, individual operating characteristics of each facility, cost considerations, preventative maintenance windows, etc.  It’s a system that can be measured, analyzed, and optimized, with data gathered and measured locally, a subset of which is used to inform and guide the macro-level process.
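A minimal sketch of that coordinator idea (facility names, capacities, and costs are hypothetical) helps show what distributing work against a business parameter can mean in practice; a real scheduler would be far richer, but the shape is the same.

    # Hypothetical facilities with remaining capacity (units) and cost per unit.
    FACILITIES = [
        {"name": "plant-A", "capacity": 400, "cost": 1.00},
        {"name": "plant-B", "capacity": 250, "cost": 0.85},
        {"name": "plant-C", "capacity": 300, "cost": 1.10},
    ]

    def distribute(demand, facilities, objective="cost"):
        """Greedy allocation of demand across facilities against one business
        parameter (here, cost) -- the macro-level knob described above."""
        plan = []
        for facility in sorted(facilities, key=lambda f: f[objective]):
            if demand <= 0:
                break
            allocation = min(demand, facility["capacity"])
            plan.append((facility["name"], allocation))
            demand -= allocation
        return plan, demand   # any remaining demand is unmet load to escalate

    # Normal operation, then a disruption: plant-B goes offline and the same
    # demand is redistributed across the remaining facilities.
    print(distribute(600, FACILITIES))
    online = [f for f in FACILITIES if f["name"] != "plant-B"]
    print(distribute(600, online))

The disruption case is computed with the same model as the normal case, just with updated inputs, which is the closed-loop behavior described just below.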

Striking the Balance

While I will dive into this a little further towards the tail end of this article, the overall premise from an operating standpoint is to have a model that optimizes the coordination of activity between individual operating units (facilities) that are running as autonomously as possible at peak efficiency, while distributing work across them in a way that maximizes production, availability, cost, or whatever other business parameters are most critical. 

The key point being that the technology infrastructure for distributing and enabling production across and within facilities should ideally be driven by business parameters that can be input and adjusted at the macro level, with the entire system of facilities adjusted in real-time in a seamless, integrated way.  Conversely, the system should be a closed loop where a disruption at the facility level can inform a change across the overall ecosystem such that workloads are redistributed (if possible) to minimize the impact on overall production.  This could manifest in anything from micro-level events (e.g., a higher-than-expected occurrence of unplanned outages) that inform production scheduling and the distribution of orders, to a major event (e.g., a fire or substantial facility outage) that redirects work across other facilities to minimize end-customer impact.  Arguably there are elements that exist within ERP systems that can account for some of this today, but the level and degree of customization required to make it a robust and inclusive process would be substantial, given much of the data required to inform the model exists outside the ERP ecosystem itself, in equipment, devices, processes, and execution within individual facilities themselves.

Thinking about Mergers, Acquisitions, and Divestitures

As I mentioned in the previous section on data, establishing standards is critical to enabling a distributed paradigm for operations, the benefit of which is also the speed at which an acquisition could be leveraged effectively in concert with an existing set of facilities.  This assumes there is an ability to translate and integrate systems rapidly to make the new facility function as a logical extension of what is already in place, but ultimately a number of those technology-related challenges would have to be worked through in the interest of optimizing individual facility performance regardless.  The alternative to having this macro-level dynamic ecosystem functioning would likely be excess cost, inefficiency, and wasted production capacity.

 

Advancing the Digital Facility

The Role of the Digital Facility

At a time when data and analytics can inform meaningful action in real-time, the starting point for optimizing performance is the individual “processor”, which is a digital facility.  While the historical mental model would focus on IT and OT systems and integrating them in a secure way, the emergence of digital equipment, sensors, devices, and connected workers has led to more complex infrastructure and an exponential amount of available data that needs to be thoughtfully integrated to maximize the value it can contribute over time.  With this increased reliance on technology, likely some of which runs locally and some in the cloud, the reliability of wired and wireless connectivity has also become a critical imperative of operating and competing as a digital manufacturer.

Thinking About Auto Maintenance

Drawing on a consumer example, I brought my car in for maintenance recently.  The first thing the dealer did was plug in and download a set of diagnostic information that was gathered over the course of my road trips over the last year and a half.  The data was collected passively, provided the technicians with input on how various engine components were performing, and also some insight on settings that I could adjust given my driving habits that would enable the car to perform better (e.g., be more fuel efficient).  These diagnostics and safety systems are part of having a modern car and we take them for granted.

Turning back to a manufacturing facility, a similar mental model should apply for managing data at the local and enterprise level: there should be a passive flow of data to a central repository, mapped to processes, equipment, and operators, that enables ongoing analytics to help troubleshoot problems, identify optimization and maintenance opportunities, and look across facilities for efficiencies that could be leveraged at broader scale.
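
As a concrete illustration of what “mapped to processes, equipment, and operators” could mean in practice, the sketch below shows a hypothetical telemetry record that an edge collector might forward to a central repository.  The field names and tagging scheme are assumptions for illustration only, not a reference schema for any particular platform.

```python
# Hypothetical shape of a passively collected telemetry record, tagged so that
# central analytics can tie readings back to a process, asset, and operator.
import json
from datetime import datetime, timezone

def build_telemetry_record(facility_id, line_id, asset_id, operator_id, process_step, readings):
    """Wrap raw sensor readings with the context needed for enterprise-level analytics."""
    return {
        "facility_id": facility_id,    # which plant produced the reading
        "line_id": line_id,            # production line within the plant
        "asset_id": asset_id,          # specific piece of equipment
        "operator_id": operator_id,    # who was running it (if applicable)
        "process_step": process_step,  # where in the routing this occurred
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "readings": readings,          # e.g., temperatures, vibration, throughput
    }

record = build_telemetry_record(
    facility_id="plant_b",
    line_id="line_3",
    asset_id="extruder_12",
    operator_id="op_4481",
    process_step="extrusion",
    readings={"motor_temp_c": 71.4, "vibration_mm_s": 2.9, "units_per_hour": 412},
)
print(json.dumps(record, indent=2))  # in practice this would be published to a message bus
```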

Building Smarter Equipment

Taking things a step further… what if I were to attach a sensor under the hood of my car, take the data, build a model, and try to make driving decisions using that model and my existing dashboard as input?  The concept seems a little ridiculous given the systems already in place within a car to help make the driving experience safe and efficient.  That being said, in a manufacturing facility with legacy equipment, that intelligence isn’t always built in, and analytics can become an informed guessing game about how a piece of equipment is functioning, without the benefit of the knowledge of the people who built the equipment in the first place.

Ultimately, the goal should be for the intelligence to be embedded within the equipment itself, to enable a level of self-healing or alerting, and then within control systems to look at operating conditions across a connected ecosystem to determine appropriate interventions as they occur, whether that be a minor adjustment to operating parameters or a level of preventative maintenance.
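
To make the “self-healing or alerting” idea tangible, here is a minimal, rule-based sketch of the kind of check that could run at the equipment or edge level.  The metrics, thresholds, and recommended actions are invented for illustration; in practice they would come from the equipment manufacturer and process engineers.

```python
# Minimal edge-side rule evaluation: compare current readings against operating
# limits and decide whether to self-adjust, alert, or schedule maintenance.
# Thresholds and actions are purely illustrative.

OPERATING_LIMITS = {
    "motor_temp_c": {"warn": 80.0, "critical": 95.0},
    "vibration_mm_s": {"warn": 4.5, "critical": 7.0},
}

def evaluate(readings: dict) -> list:
    """Return a list of (metric, severity, recommended_action) tuples."""
    findings = []
    for metric, limits in OPERATING_LIMITS.items():
        value = readings.get(metric)
        if value is None:
            continue  # no reading available for this metric
        if value >= limits["critical"]:
            findings.append((metric, "critical", "stop equipment and dispatch maintenance"))
        elif value >= limits["warn"]:
            findings.append((metric, "warning", "reduce load and schedule preventative maintenance"))
    return findings

print(evaluate({"motor_temp_c": 82.3, "vibration_mm_s": 3.1}))
# [('motor_temp_c', 'warning', 'reduce load and schedule preventative maintenance')]
```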

The Role of Edge Computing and Facility Data

The desire to optimize performance and safety at the individual facility level means that decisions need to be informed and actions taken in near real-time as much as possible.  This premise then suggests that facility data management and edge computing will continue to increase in criticality as more advanced uses of AI become part of everyday integrated work processes and facility operations.

 

Enabling Operators with Intelligence

The Knowledge Challenge

With the general labor shortage in the market and the retirement of experienced, skilled laborers, managing knowledge and accelerating productivity are major issues to be addressed in manufacturing facilities.  There are a number of challenges associated with this situation, not the least of which can be safety related, depending on the nature of the manufacturing environment itself.  Beyond that, the longer it takes to make an operator productive relative to their average tenure (something statistics would suggest is continually shrinking over time), the more the effectiveness of the average worker becomes a limiting factor in the operating performance of a facility overall.

Understanding Operator Overload

One way things have gotten worse is the proliferation of systems that comes with “modernizing” the manufacturing environment itself.  Confronted with an ever-expanding set of control, IT, ERP, and analytical systems, all of which can be sending alerts and requesting action (to varying degrees of criticality) on a relatively continuous basis, individual operators and supervisors in a facility are under substantially more pressure (along with the exponential amounts of data now available).  This is further complicated in situations where an individual “wears multiple hats”, fulfilling multiple roles/personas within a given facility, where arbitrating which actions to take against that increased number of demands can become considerably more complex.

Why Digital Experience Matters

While the number of applications that are part of an operating environment may not be easy to reduce or simplify without significant investment (and time to make change happen), it is possible to look at digital experience platforms (DXPs) as a means to consolidate multiple applications into a single, integrated experience, inclusive of AR/VR/XR technologies as appropriate.  Organizing around an individual operator’s responsibilities can help reduce confusion, eliminate duplicated data entry, improve data quality, and, by extension, improve productivity, safety, and effectiveness.
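
As a simple illustration of organizing around an operator’s responsibilities, the sketch below consolidates alerts from several systems into one prioritized view filtered by role.  The role names, source systems, and priority scheme are assumptions for the example, not features of any particular DXP.

```python
# Illustrative consolidation of alerts from multiple systems into a single,
# role-filtered work queue for an operator. Roles, systems, and priorities
# are hypothetical.

SEVERITY_RANK = {"critical": 0, "warning": 1, "info": 2}

ROLE_SUBSCRIPTIONS = {
    "line_operator": {"control_system", "quality_system"},
    "maintenance_tech": {"control_system", "asset_management"},
}

def operator_queue(alerts: list, role: str) -> list:
    """Filter alerts to the systems relevant for a role and sort by severity."""
    relevant = [a for a in alerts if a["source"] in ROLE_SUBSCRIPTIONS.get(role, set())]
    return sorted(relevant, key=lambda a: SEVERITY_RANK.get(a["severity"], 99))

alerts = [
    {"source": "erp", "severity": "info", "message": "Order 1042 released"},
    {"source": "control_system", "severity": "critical", "message": "Extruder 12 over temperature"},
    {"source": "quality_system", "severity": "warning", "message": "Moisture trending out of spec"},
]

for item in operator_queue(alerts, role="line_operator"):
    print(item["severity"].upper(), "-", item["message"])
```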

The Role of the Intelligent Agent

With a foundation in place to organize and present relevant information and actions to an operator on a real-time basis, the next level of opportunity comes with the integration of intelligent agents (AI-enabled tools) into a digital worker platform to inform meaningful, guided actions that will ultimately create the most production and safety impact on an ongoing basis.  Again, there is a significant dependency on edge computing, wireless infrastructure, facility data, mobile devices, a delivery mechanism (the DXP mentioned above), and a sound underlying technology strategy to enable this at scale, but it is ultimately where AI tools can have a major impact in manufacturing moving forward.

 

Optimizing Performance through Orchestration

Why Orchestration Matters

Orchestration itself isn’t a new concept in manufacturing from my perspective, as legacy versions of it are likely inherent in control and MES systems themselves.  The challenge occurs when you want to scale that concept to include digital equipment, digital workers, digital devices, control systems, and connected applications in one seamless, integrated, end-to-end process.  Orchestration provides the means to establish configurable, dynamic workflows and associated rules for how you operate and optimize performance within and across facilities in a digital enterprise.

While this is definitely a capability that would need to be developed and extended over time, the concept is to think of the manufacturing ecosystem as a seamless collaboration of operators and equipment to drive efficient and safe production of finished goods.  Once the infrastructure to coordinate and track activity is in place, process performance can be automatically recorded and analyzed to inform continuous improvement on an ongoing basis.
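
To illustrate what “configurable, dynamic workflows and associated rules” might look like in the simplest possible terms, here is a sketch of rules declared as data and evaluated against facility events.  The event fields, triggers, and actions are invented examples rather than any vendor’s orchestration model.

```python
# Tiny illustration of orchestration rules declared as data: each rule maps a
# triggering condition on an incoming event to an action elsewhere in the
# ecosystem. Event fields, conditions, and actions are hypothetical.

RULES = [
    {
        "name": "reroute_material_on_line_stop",
        "when": lambda e: e["type"] == "line_stopped",
        "then": lambda e: f"redirect AGVs away from {e['line_id']}",
    },
    {
        "name": "raise_work_order_on_critical_alert",
        "when": lambda e: e["type"] == "equipment_alert" and e["severity"] == "critical",
        "then": lambda e: f"create maintenance work order for {e['asset_id']}",
    },
]

def orchestrate(event: dict) -> list:
    """Evaluate all rules against an event and return the resulting actions."""
    return [rule["then"](event) for rule in RULES if rule["when"](event)]

print(orchestrate({"type": "equipment_alert", "severity": "critical", "asset_id": "extruder_12"}))
# ['create maintenance work order for extruder_12']
```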

Orchestrating within the Facility

Uses of orchestration within a facility can range from something as simple as coordinating and optimizing material movement between autonomous vehicles and forklifts, to computer vision applications for safety and quality management.  With the increasing number of connected solutions within a facility, having the means to integrate and coordinate activity between and across them offers a significant opportunity in digital manufacturing moving forward.

Orchestrating across the Enterprise

Scaling back out to the enterprise level and looking across facilities, there are opportunities in areas like procurement of MRO supplies and optimization of inventory levels, management and optimization of production planning across similar facilities, and benchmarking and analysis of process performance to find improvements that can be applied across facilities, in a way that creates substantially greater impact than is possible if the focus is limited to the individual facility alone.  Given that certain enterprise systems like ERPs tend to operate largely at a global rather than local level, having infrastructure in place to coordinate activity across both can create visibility into improvement opportunities, and thereby substantial value, over time.

Coordinated Execution

Finally, to coordinate between the local and global levels of execution, a thoughtful approach to managing data and the associated analytics needs to be taken.  As was mentioned in the opening, the overall operating model is meant to leverage a configurable, distributed paradigm, so the data that is shared and analyzed within and across layers is important to calibrate as part of the evolving operating and technology strategy.

 

Wrapping Up

There is a considerable amount of complexity associated with moving from a legacy, process- and equipment-oriented mindset to one that is digitally enabled and based on insight-driven, orchestrated action.  That being said, the good news is that the value that can be unlocked with a thoughtful digital strategy is substantial, given we’re still on the front end of the evolution overall.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 02/12/2024

Workforce and Sourcing Strategy – Overview

Overview

It’s been nearly 20 years since I first worked on workforce and sourcing strategy for IT at Allstate Insurance, and I’ve had this topic in my backlog for quite some time.  Unfortunately (or fortunately, depending on how you want to see it), in organizing my thoughts on the various dimensions that come into play, it became clear that this is a large landscape to cover, mostly because you can’t establish a workforce and sourcing strategy without exploring your overall IT operating model, so I’m going to break it down into a series of articles that will hopefully make sense both independently and as parts of the whole.

Overall, this series should address a set of fundamental challenges facing IT today, namely:

  1. How to address emerging technologies in relation to an existing portfolio of IT work
    • Advances like Artificial Intelligence (AI), Cloud Computing, and Cyber Security have added complexity to the IT landscape that isn’t easy to integrate into organizations effectively without a clear strategy
  2. How to optimize value/cost in a challenging economic environment
    • Cost consciousness can drive adjustments in workforce and sourcing strategies that can have a detrimental impact on operating performance if not handled in a thoughtful manner
  3. How to leverage sourcing to drive competitive advantage and enable relentless innovation
    • Sourcing capabilities in a thoughtful manner creates organizational agility whereas an ineffective strategy can do the opposite
  4. How to monitor, govern, and manage change over time
    • Finally, even if all of the above are identified and approached effectively, technology capabilities continue to advance more rapidly than any organization can integrate them, and the ability to evolve a strategy is critical if it is to remain sustainable over time

The remainder of this article will provide a brief overview of each of the areas I will explore in future posts.

Where You Are

Like any other strategy work, a foundation needs to be established on the current state, both in terms of internal and external capabilities.  Without transparency, managing the overall workforce and understanding the labor components of operating cost becomes extremely difficult.

Some key questions that will be addressed in this area:

  • How to organize information around the mix of skills in place across IT and where they are coming from (internally, externally)
  • How and when a competency model becomes useful for core IT roles (business analyst, PM/Scrum Master, architect, data engineer, data scientist, etc.)
  • How organizational scale and demand fluctuation affects an IT operating model
  • How organizational design influences an IT operating model
  • How to think about the engagement of third-party providers (augmentation, consultants, integrators, product/service providers)
  • How to think about the distribution of resources against an overall IT portfolio of work
  • How enterprise standards and governance play a role in a sourced IT environment

As one would expect, laying the foundation for understanding the current IT operating model is a significant step in setting the stage for transforming and optimizing it and, consequently, how the data is organized can be important.

What You Need

This is the “crystal ball” step, because it involves a blend of the tangible and intangible, the combination of what you know (in terms of current and projected business demand and your overall technology strategy) and what is based on a level of speculation (in terms of industry/competitive strategy and technology innovation/trends).  The good news is that, if you think about your workforce and sourcing strategy as a “living” thing you manage, govern, and adjust over time, establishing direction doesn’t need to be a cumbersome, time-consuming process, because it’s a snapshot meant to inform near-term decisions based on longer-term assumptions that can be adjusted as required.  This is where a lot of strategy work breaks down in my experience: over-analyzing past the point of value being created, with a significant loss of agility in the process.  “Operating with Agility” (as I refer to it in my Excellence By Design article) is really about understanding how to build for resiliency and adaptability, because change is a constant reality that no “long-term planning” is ever going to address.  The goal is to build an IT operating model and associated culture that flexes and adjusts continuously, with minimal friction as changes in direction inevitably occur.

Some key questions that will be addressed in this area:

  • How to think about the relationship of internal needs and external business conditions in establishing demand for IT capabilities
  • How to align technology strategy and ongoing marketplace trends to evaluate the mix of capabilities and skills required in the near- and long-term
  • How to determine where it makes sense to retain versus source various capabilities required of IT over time
  • How to think about the level, role, and importance of standards and governance in an overall portfolio, especially where sourcing is involved

Again, the critical concept from my perspective in establishing strategy is not to ever assume it is a static, fixed thing.  I think of strategy development as establishing goals and an associated operating framework that allows you to adjust as required without requiring a disruptive level of change, because the more disruptive a change is, the more likely it will have a negative impact on your ability to deliver meaningful business value, potentially for an extended period of time.  Culture is an important component to all this as well because I’ve experienced situations over the years where, because a strategy has a significant investment associated with it (financial or otherwise), there is a reticence to evolve or adapt it.  This is predicated on the presumption that evolution is a sign of poor leadership or a lack of vision in the original articulation of an opportunity when, in fact, the ability to adapt to changing circumstances is exactly the opposite, provided the overall value to be obtained is still there.

How You Supply It

In establishing a strategy, I think of it in three basic components: what you do yourself (your workforce strategy), what you leverage partners for (your sourcing strategy), and how you purchase those products/services (your procurement strategy).  Again, my experience has been varied in this regard, where I’ve seen fairly developed and defined strategies in place and then ones that are not well defined.  The consequence of a poorly defined strategy in any of these areas translates directly into adverse performance, excess cost, lost agility, or some combination of those things based on the circumstances, governance model, and partners involved.

Some key questions that will be addressed in this area:

  • How to think about the capabilities and skills that should be retained within an organization versus those that can be supplied by external providers
  • How to structure and organize a sourcing strategy to manage cost/quality, as well as provide resiliency and agility in relation to an IT portfolio
  • How to think about captive IT scenarios by comparison with outsourcing or working in a distributed team environment
  • How contracting and procurement decisions play a role in influencing quality and sustainability

The good news is that a workforce and sourcing strategy isn’t and doesn’t need to be rocket science.  The bad news is that developing one takes a level of discipline and transparency that isn’t always easy to manage, especially in larger organizations.  I would argue that the cost efficiency and quality/service level gains easily justify the investment needed to establish them, but the hurdle to overcome, as with many things in pursuing excellence in IT, is having the leadership mindset to make deliberate choices at an enterprise- (rather than at a transactional/project- or program-) level.

How You Develop It

“People are our greatest asset…”  I’m sure at least half the readers of this article have heard that said (or seen it written) at least once, if not many, many times, having worked enough years and/or in multiple organizations.  Unfortunately, the number of times I’ve seen that (or a variation of it) expressed where no actual commitment to employee development or associated framework for learning and development was in place is why I can’t write or say the expression without cringing.  Again, the good news is that having a talent strategy isn’t complicated; the bad news is that the leadership commitment to living into one is generally the issue.

Some key questions that will be addressed in this area:

  • How to think about specialization versus core skills in the context of an IT operating model
  • How to organize an education curriculum in terms of just-in-time versus mandatory training
  • How to frame education needs in relation to ongoing delivery work
  • How to integrate employee development as part of a workforce strategy
  • How to think about demographics in the context of a workforce strategy

From my experiences of education requirements in my software development days at Price Waterhouse thirty-two years ago to the TechFluency program that was part of my team’s responsibilities (most recently) as the CTO of Georgia-Pacific, including what I experienced in various consulting and other organizations along the way, I’ve definitely seen a lot of variations in approach.  What I would say at an overall level (connecting to “Investing in Employee Development” in my Creating Value Through Strategy article) is that, however structured and defined a talent strategy is, what shows up to employees is the commitment to their development in practice.  Classes and coursework that no one is given time to take and development that isn’t supported as a core part of an organization’s culture will definitely have a negative impact on retention and operating performance over time.

Where You Want to Be

While the majority of topics (and associated articles) to this point will likely focus on concepts, structure, and operating mechanics related to people, this area is where I will most likely lean towards a point of view on “what good looks like” in terms of an IT operating model and how I think about it being constructed based on various scenarios/business needs.  As with anything, there is no “right answer” for how to structure, staff, and source a technology organization; the point is to do so in a way that is thoughtful, deliberate, and aligned to the needs of the organization it supports.

Some key questions that will be addressed in this area:

  • How to think about the mix of skills (existing and emerging) in relation to your workforce
  • How to align partners to the right work in the right way to enable innovation and agility, while optimizing value/cost
  • My point of view on cloud computing, analytics, and cyber security in relation to an IT talent pool

Again, as there is no “right” way to define an IT organization, likely the way I will approach this topic is to consider a few different scenarios in terms of organizational/business needs and then offer ways to think about staffing and sourcing IT to support and enable those needs, likely with some core elements (like courageous leadership) that are critical for establishing a culture of excellence under any circumstance.

How You Manage It

With a model established for how to structure and operate an IT organization, there is a need for an ongoing mechanism to evolve it over time.  From an IT operations standpoint, this is made up of three primary components from my perspective: how you ensure you’re creating business value and desired outcomes (your IT governance processes), how you manage internal talent (your performance management process), and how you work with third-party partners (your vendor management process).  There is a nuance on the third dimension where captive entities are involved, but for the sake of simplicity at a high level, it’s reasonable to assume that can be managed through either the second or third capability or some combination thereof, depending on the operating model.  Again, in my experience, I’ve seen and helped establish fairly robust processes for each of these things, but I’ve also seen far less structured models.  As with any form of governance, my personal bias is to focus on how these things enable delivery and create value, not to create administrative overhead.

Some key questions that will be addressed in this area:

  • How to clarify the relationship between workforce and sourcing strategy and business metrics that matter from an IT standpoint
  • How to think about where standards and governance are important in overall IT performance in relation to a portfolio of work
  • How proactive versus reactive performance management affects IT effectiveness and value/cost from a business standpoint
  • How to structure and approach partner governance in a consistent manner that promotes overall service and product quality

In my article on Optimizing the Value of IT, I write about the continuous improvement cycle that begins with transparency and governance, which ultimately inform changes that subsequently lead to future improvement, and so on.  There is little as important to establishing excellence in IT as managing the workforce and sourcing of work effectively.  That being said, since technology is an ever-evolving landscape of change, the ability to monitor and adjust the various levers influencing operating performance is equally important.  Considering these dimensions when establishing the strategy itself (e.g., “how are we going to measure that attribute/decision on an ongoing basis” or “how will we know that’s been successful”) can help promote the learning and adjustments needed to remain agile and responsive to change over time.

“Peeling the Onion”

With all of the above questions established, here is a quick preview of the next layer of the model described above.  What was covered in the previous sections is largely a level below each of the boxes presented, but this framework represents how I’ve broken down the topic for future exploration and discussion.

Wrapping Up

Having written even this overview, it’s probably clear why workforce and sourcing strategy has been on my backlog for such a long time.  It’s a topic akin to describing the ocean: there isn’t a way to define and explore it at a high level that does justice to the nuances and complexities involved, particularly if you want to develop one in a disciplined way that creates the most value.  That being said, I believe a structured approach is well worth the investment, given the business value it unlocks (at the right level of cost).

How the subsequent articles will unfold is to be determined, as I haven’t written any of them yet, but I’m looking forward to seeing where the journey leads and the discussion that may ensue.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 01/17/2024

Creating Value Through Strategy

Context

One of the things that I’ve come to appreciate over the course of time is the value of what I call “actionable strategy”.  By this, I mean a blend of the conceptual and practical, a framework that can be used to set direction and organize execution without being too prescriptive, while still providing a vision and mental model for leadership and teams to understand and align on the things that matter.

Without a strategy, you can have an organization largely focused on execution, but that tends to create significant operating or technical debt and complexity over time, ultimately having an adverse impact on competitive advantage, slowing delivery, and driving significant operating cost.  Similarly, a conceptual strategy that doesn’t provide enough structure to organize and facilitate execution tends to create little impact over time as teams don’t know how to apply it in a practical sense, or it can add significant overhead and cost in the administration required to map its strategic objectives to the actual work being done across the organization (given they aren’t aligned up front or at all).  The root causes of these situations can vary, but the important point is to recognize the criticality of an actionable business-aligned technology strategy and its role in guiding execution (and thereby the value technology can create for an organization).

In reality, there are so many internal and external factors that can influence priorities in an organization over time, that one’s ability to provide continuity of direction with clear conceptual outcomes (while not being too hung up on specific “tasks”) can be important in both creating the conditions for transformation and sustainable change without having to “reset” that direction very often.  This is the essence of why framework-centric thinking is so important in my mind.  Sustainable change takes time, because it’s a mindset, a culture, and way of operating.  If a strategy is well-conceived and directionally correct, the activities and priorities within that model may change, but the ability to continue to advance the organization’s goals and create value should still exist.  Said differently: Strategies are difficult to establish and operationalize.  The less you have to do a larger-scale reset of them, the better.  It’s also far easier to adjust priorities and activities than higher-level strategies, given the time it takes (particularly in larger organizations) to establish awareness of a vision and strategy.  This is especially true if the new direction represents a departure from what has been in place for some time.

To be clear, while there is a relationship between this topic and what I covered in my article on Excellence By Design, the focus there is more on the operation and execution of IT within an organization, not so much the vision and direction of what you’d ideally like to accomplish overall.

The rest of this article will focus on the various dimensions that I believe comprise a good strategy, how I think about them, and ways that they could create measurable impact.  There is nothing particularly “IT-specific” about these categories (i.e., this is conceptually akin to ‘better, faster, cheaper’), and I would argue they could apply equally well to other areas of a business, though they differ in how they translate at an operating level.

In relation to the Measures outlined in each of the sections below, a few notes for awareness:

  • I listed several potential areas to consider and explore in each section, along with some questions that come to mind with each.
  • The goal wasn’t to be exhaustive or suggest that I’d recommend tracking any or all of them on an “IT Scorecard”, rather to provide some food for thought
  • My general point of view is that it’s better to track as little as possible from an “IT reporting” standpoint, unless there is intention to leverage those metrics to drive action and decisions. My experience with IT metrics historically is that they are overreported and underleveraged (and therefore not a good use of company time and resources).  I touch on some of these concepts in the article On Project Health and Transparency

Innovate

What It Is and Why It Matters

Stealing from my article on Excellence By Design: “Relentless innovation is the notion that anything we are doing today may be irrelevant tomorrow, and therefore we should continuously improve and reinvent our capabilities to ones that create the most long-term value.”

Technology is evolving at a rate faster than most organizations’ ability to adopt or integrate those capabilities effectively.  As a result, a company’s ability to leverage these advances becomes increasingly challenging over time, especially to the degree that the underlying environment isn’t architected in a manner to facilitate their integration and adoption. 

The upshot of this is that the benefits to be achieved could be marginalized, as attempts to capitalize on these innovations will likely become point solutions or one-off efforts that don’t scale or that create a different form of technical debt over time.  This is very evident in areas like analytics, where capabilities like GenAI and other artificial intelligence-oriented solutions are only as effective as the underlying architecture of the environment into which they are integrated.  Are wins possible that could be material from a business standpoint?  Absolutely yes.  Will it be easy to scale them if you don’t invest in foundational things to enable that?  Very likely not.

The positive side of this is that technology is in a much different place than it was ten or twenty years ago, where it can significantly improve or enhance a company’s capabilities or competitive position.  Even in the most arcane of circumstances, there likely is an opportunity for technology to fuel change and growth in a digital business environment, whether that is internal to the operations of a company, or through its interactions with customers, suppliers, or partners (or some combination thereof).

Key Dimensions to Consider

Thinking about this area, a number of dimensions came to mind:

  • Promoting Courageous Leadership
    • This begins by acknowledging that leadership is critical to setting the stage for innovation over time
    • There are countless examples of organizations that were market leaders who ultimately lost their competitive advantage due to complacency or an inability to see or respond to changing market conditions effectively
  • Fueling Competitive Advantage
    • This is about understanding how technology helps create competitive advantage for a company and focusing on those areas, rather than trying to do everything in an unstructured or broad-based way, which would likely diffuse focus, spread critical resources too thin, and marginalize realized benefits over time
  • Investing in Disciplined Experimentation
    • This is about having a well-defined process to enable testing out new business and technology capabilities in a way that is purposeful and that creates longer-term benefits
    • The process aspect of this is important, as it is relatively easy to spin up a lot of “innovation and improvement” efforts without taking the time to understand and evaluate the value and implications of those activities in advance. The problem being that you can either end up wasting money where the return on investment isn’t significant or develop concepts that can’t easily be scaled to production-level solutions, which limits their value in practice
  • Enabling Rapid Technology Adoption
    • This dimension is about understanding the role of architecture, standards, and governance in integrating and adopting new technical capabilities over time
    • As an example, an organization with an established component (or micro-service) architecture and integration strategy should be able to test and adopt new technologies much faster than one without them. That isn’t to suggest it can’t be done, but rather that the cost and time to execute those objectives will increase as delivery becomes more of a brute force situation than one enabled by a well-architected environment
  • Establishing a Culture of Sustainability
    • Following onto the prior point, as new solutions are considered, tested, and adopted, product lifecycle considerations should come into play.
    • Specifically, as part of the introduction of something new, is it possible to replace or retire something that currently exists?
    • At some point, when new technologies and solutions are introduced in a relatively ungoverned manner, it is only a matter of time before the cost and complexity of the technology footprint chokes an organization’s ability both to continue to leverage those investments and to introduce new capabilities rapidly.

Measuring Impact

Several ways to think about impact:

  • Competitive Advantage
    • What is a company’s position relative to its competition in the markets where it competes, and on the metrics relevant to those markets?
  • Market Differentiation
    • Is innovation fueling new capabilities not offered by competitors?
    • Is the capability gap widening or narrowing over time?
    • I separated these first two points, though they are arguably flavors of the same thing, to emphasize the importance of looking at both capabilities and outcomes from a competitive standpoint. One can be doing very well from a competitive standpoint relative to a given market, but have competitors developing or extending their capabilities faster, in which case, there could be risk of the overall competitive position changing in time
  • Reduced Time to Adopt New Solutions
    • What is the average length of time between a major technology advancement (e.g., cloud computing, artificial intelligence) becoming available and an organization’s ability to perform meaningful experiments and/or deploy it in a production setting?
    • What is the ratio of investment on infrastructure in relation to new technologies meant to leverage it over time?
  • Reduced Technical Debt
    • What percentage of experiments turn into production solutions?
    • How easy is it to scale those production solutions (vertically or horizontally) across an enterprise?
    • Are new innovations enabling the elimination of other legacy solutions? Are they additive and complementary or redundant at some level?

Accelerate

What It Is and Why It Matters

“Take as much time as you need; let’s make sure we do it right, no matter what.”  This is a declaration that I don’t think I’ve ever heard in nearly thirty-two years in technology.  Speed matters, whether expressed as “first mover advantage” or any other label one could place on the desire to produce value at a pace that is at or beyond an organization’s ability to integrate and assimilate all the changes.

That being said, the means to speed is not just a rush to iterative methodology.  The number of times I’ve heard or seen “Agile Transformation” (normally followed by months of training people on concepts like “Scrum meetings”, “Sprints”, and “User Stories”) posed as a silver bullet to providing disproportionate delivery results goes beyond my ability to count and it’s unfortunate.  Similarly, I’ve heard glorified versions of perpetual hackathons championed, where the delivery process involves cobbling together solutions in a “launch and learn” mindset that ultimately are poorly architected, can’t scale, aren’t repeatable, create massive amounts of technical debt, and never are remediated in production.  These are cases where things done in the interest of “speed” actually destroy value over time.

That being said, moving from monolithic to iterative (or product-centric) approaches and DevSecOps is generally a good thing to do.  Does this remedy issues in a business/IT relationship, solve for a lack of architecture, standards and governance, address an overall lack of portfolio-level prioritization, or a host of other issues that also affect operating performance and value creation over time?  Absolutely not.

The dimensions discussed in this section are meant to highlight a few areas beyond methodology that I believe contribute to delivering value at speed, and ones that are often overlooked in the interest of a “quick fix” (which changing methodology generally isn’t).

Key Dimensions to Consider

Dimensions that are top of mind in relation to this area:

  • Optimizing Portfolio Investments
    • Accelerating delivery begins by first taking a look at the overall portfolio makeup and ensuring the level of ongoing delivery is appropriate to the capabilities of the organization. This includes utilization of critical knowledge resources (e.g., planning on a named resource versus an FTE-basis), leverage of an overall release strategy, alignment of variable capacity to the right efforts, etc.
    • Said differently, when an organization tries to do too much, it tends to do a lot of things ineffectively, even under the best of circumstances. This does not help enhance speed to value at the overall level
  • Promoting Reuse, Standards, and Governance
    • This dimension is about recognizing the value that frameworks, standards and governance (along with architecture strategy) play in accelerating delivery over time, because they become assets and artifacts that can be leveraged on projects to reduce risk as well as effort
    • Where these things don’t exist, there almost certainly will be an increase in project effort (and duration) and technical debt that ultimately will slow progress on developing and integrating new solutions into the landscape
  • Facilitating Continuous Improvement
    • This dimension is about establishing an environment where learning from mistakes is encouraged and leveraged proactively on an ongoing basis to improve the efficacy of estimation, planning, execution, and deployment of solutions
    • It’s worth noting that this is as much an issue of culture as of process, because teams need to know that it is safe, expected, and appreciated to share learnings on delivery efforts if there is to be sustainable improvement over time
  • Promoting Speed to Value
    • This is about understanding the delivery process, exploring iterative approaches, ensuring scope is managed and prioritized to maximize impact, and so on
    • I’ve written separately that methodology only provides a process, not necessarily a solution to underlying cultural or delivery issues that may exist. As such, it is part of what should be examined and understood in the interest of breaking down monolithic approaches and delivering value at a reasonable pace and frequency, but it is definitely not a silver bullet.  Silver bullets don’t exist, nor will they ever.
  • Establishing a Culture of Quality
    • In the proverbial “Good, Fast, or Cheap” triangle, the general assumption is that you can only choose two of the three as priorities and accept that the third will be compromised. Given that most organizations want results to be delivered quickly and don’t have unlimited financial resources, the implication is that quality will be the dimension that suffers.
    • The irony of this premise is that, where quality is compromised repeatedly on projects, the general outcome is that technical debt will be increased, maintenance effort along with it, and future delivery efforts will be hampered as a consequence of those choices
    • As a result, in any environment where speed is important, quality needs to be a significant focus so ongoing delivery can be focused as much as possible on developing new capabilities and not fixing things that were not delivered properly to begin with

Measuring Impact

Several ways to think about impact:

  • Reduced Time to Market
    • What is the average time from approval to delivery?
    • What is the percentage of user stories/use cases delivered per sprint (in an iterative model)? What level of spillover/deferral is occurring on an ongoing basis (this can be an indicator of estimation, planning, or execution-related issues)?
    • Are retrospectives part of the delivery process and valuable in terms of their learnings?
  • Increase in Leverage of Standards
    • Is there an architecture review process in place? Are standards documented, accessible, and in use?  Are findings from reviews being implemented as an outcome of the governance process?
    • What percentage of projects are establishing or leveraging reusable common components, services/APIs, etc.?
  • Increased Quality
    • Are defect injection rates trending in a positive direction?
    • What level of severity 1/2 issues are uncovered post-production in relation to those discovered in testing pre-deployment (efficacy of testing)?
    • Are criteria in place and leveraged for production deployment (whether leveraging CI/CD processes or otherwise)?
    • Is production support effort for critical solutions decreasing over time (non-maintenance related)?
  • Lower Average Project Cost
    • Is the average labor cost/effort per delivery reducing on an ongoing basis?

Optimize

What It Is and Why It Matters

Along with the pursuit of speed, it is equally important to pursue “simplicity” in today’s complex technology environment.  With so many layers now present, from hosted to cloud-based solutions, packaged and custom software, internal and externally integrated SaaS and PaaS solutions, digital equipment and devices, cyber security requirements, analytics solutions, and monitoring tools… complexity is everywhere.  In large organizations, that complexity tends to be magnified for many reasons, compounding across the technology footprint and the organizations required to design, deliver, and support integrated solutions at scale.

My experience with optimization historically is that it tends to be too reactive a process, and it generally falls by the wayside when business conditions are favorable.  The problem with this is the bloat and inefficiency that tend to be bred in a growth environment, which ultimately reduce the value created by IT as spend increases.  That is why a purposeful approach that is part of a larger portfolio allocation strategy is important.  Things like workforce and sourcing strategy, modernization, ongoing rationalization and simplification, standardization, and continuous improvement are important to offset what otherwise could lead to a massive “correction” the minute conditions change.  I would argue that, similar to performance improvement in software development, an organization should never be so cost inefficient that a massive correction is even possible.  For that to be the case, something extremely disruptive would have to have occurred; otherwise, the discipline in delivery and operations likely wasn’t where it needed to be leading up to that adjustment.

I’ve highlighted a few dimensions that are top of mind in regard to ongoing optimization, but have written an entire article on optimizing value over cost that is a more thorough exploration of this topic if this is of interest (Optimizing the Value of IT).

Key Dimensions to Consider

Dimensions that are top of mind in relation to this area:

  • Reducing Complexity
    • There is some very simple math related to complexity in an IT environment, which is that increasing complexity drives a (sometimes disproportionate) increase in cost and time to deliver solutions, especially where there is a lack of architecture standards and governance
    • In areas like Integration and Analytics, this is particularly important, given they are both foundational and enable a significant amount of business capabilities when done well
    • It is also important to clarify that reducing complexity doesn’t necessarily equate to reducing assets (applications, data solutions, technologies, devices, integration endpoints, etc.), because it could be the case that the number of desired capabilities in an organization requires an increasing number of solutions over time. That being said, with the right integration architecture and associated standards, as an example, the ability to integrate and rationalize solutions will be significantly easier and faster than without them (which is complexity of a different kind)
  • Optimizing Ongoing Costs
    • I recently wrote an article on Optimizing the Value of IT, so I won’t cover all that material again here
    • The overall point is that there are many levers available to increase value while managing or reducing technology costs in an enterprise
    • That being said, aggregate IT spend can and may increase over time, and be entirely appropriate depending on the circumstances, as long as the value delivered increases proportionately (or in excess of that amount)
  • Continually Modernizing
    • The mental model that I’ve had for support for a number of years is to liken it to city planning and urban renewal. Modernizing a footprint is never a one-time event, it needs to be a continuous process
    • Where this tends to break down in many organizations is the “Keep the Lights On” concept, which suggests that maintenance spend should be minimized on an ongoing basis to allow the maximum amount of funding for discretionary efforts that advance new capabilities
    • The problem with this logic is that it tends to lead to neglect of core infrastructure and solutions, which then become obsolete or unsupportable, pose security risks, and approach end of life with only very expensive and disruptive paths to upgrade or modernization
    • It would be far easier to carve out a portion of the annual spend allocation for a thoughtful and continuous modernization where these become ongoing efforts, are less disruptive, and longer-term costs are managed more effectively at lower overall risk
  • Establishing and Maintaining a Workforce Strategy
    • I have an article in my backlog for this blog around workforce and sourcing strategy, having spent time developing both in the past, so I won’t elaborate too much on this right now other than to say it’s an important component in an organizational strategy for multiple reasons, the largest being that it enables you to flex delivery capability (up and down) to match demand while maintaining quality and a reasonable cost structure
  • Proactively Managing Performance
    • Unpopular though it is, my experience in many of the organizations in which I’ve worked over the years has been that performance management is handled on a reactive basis
    • Particularly when an organization is in a period of growth, notwithstanding extreme situations, the tendency can be to add people and neglect the performance management process with an “all hands on deck” mentality that ultimately has a negative impact on quality, productivity, morale, and other measures that matter
    • This isn’t an argument for formula-driven processes, as I’ve worked in organizations that have forced performance curves against an employee population, and sometimes to significant, detrimental effect. My primary argument is that I’d rather have an environment with 2% involuntary annual attrition (conceptually), than one where it isn’t managed at all, market conditions change, and suddenly there is a push for a 10% reduction every three years, where competent “average” talent is caught in the crossfire.  These over-corrections cause significant disruption, have material impact on employee loyalty, productivity, and morale, and generally (in my opinion) are the result of neglecting performance management on an ongoing basis

Measuring Impact

Several ways to think about impact:

  • Increased Value/Cost Ratio
    • Is the value delivered for IT-related effort increasing in relation to cost (whether the latter is increasing, decreasing, or remaining flat)?
  • Reduced Overall Assets
    • Has the number of duplicated/functionally equivalent/redundant assets (applications, technologies, data solutions, devices, etc.) been reduced over time?
  • Lower Complexity
    • Is the percentage of effort on the average delivery project spent on addressing issues related to a lack of standards, unique technologies, redundant systems, etc. reducing over time?
  • Lower Technical Debt
    • What percentage of overall IT spend is committed to addressing quality, technology, end-of-life, or non-conformant solutions (to standards) in production on an ongoing basis?

Inspire

What It Is and Why It Matters

Having written my last article on culture, I’m not going to dive deeply into the topic, but I believe the subject of employee engagement and retention (“People are our greatest asset…”) is often spoken about, but not proportionately acted on in deliberate ways.  It is far different, as an example, to tell employees their learning and development is important, but then either not provide the means for them to receive training and education or put “delivery” needs above that growth on an ongoing basis.  It’s expedient on a short-term level, but the cost to an organization in loyalty, morale, and ultimately productivity (and results) is significant.

Inspiration matters.  I fundamentally believe you achieve excellence as an organization by enrolling everyone possible in creating a differentiated and special workplace.  Having worked in environments where there was a contagious enthusiasm in what we were doing and also in ones I’d consider relatively toxic and unhealthy, there’s no doubt on the impact it has on the investment people make in doing their best work.

Following onto this, I believe there is also a distinction to be drawn in engaging the “average” employees across the organization versus targeting the “top performers”.  I have written about this previously, but top performers, while important to recognize and leverage effectively, don’t generally struggle with motivation (it’s part of what makes them top performers to begin with).  The problem is that placing a disproportionate amount of management focus on this subset of the employee population can have a significant adverse impact, because the majority of an organization is not “top performers” and that’s completely fine.  If the engagement, output, and productivity of the average employee is elevated even marginally, the net impact to organizational results should be fairly significant in most environments.

The dimensions below represent a few ways that I think about employee engagement and creating an inspired workplace.

Key Dimensions to Consider

Dimensions that are top of mind in relation to this area:

  • Becoming an Employer of Choice
    • Reputation matters. Very simple, but relevant point
    • This becomes real in how employees are treated on a cultural and day-to-day level, compensated, and managed even in the situation where they exit the company (willingly or otherwise)
    • Having worked for and with organizations that have had a “reputation” that is unflattering in certain ways, the thing I’ve come to be aware of over time is how important that quality is, not only when you work for a company, but the perception of it that then becomes attached to you afterwards
    • Two very simple questions to employees that could serve as a litmus test in this regard:
      • If you were looking for a job today, knowing what you know now, would you come work here again?
      • How likely would you be to recommend this as a place to work to a friend?
  • Promoting a Healthy Culture
    • Following onto the previous point, I recently wrote about The Criticality of Culture, so I won’t delve into the mechanics of this beyond the fact that dedicated, talented employees are critical to every organization, of any size, and the way in which they are treated and the environment in which they work is crucial to optimizing the experience for them and the results that will be obtained for the organization as a whole
  • Investing in Employee Development
    • Having worked in organizations where there was both an explicit, dedicated commitment to ongoing education and development and others where there was “never time” to invest in or “delivery commitments” that interfered with people’s learning and growth, the consequent impact on productivity and organizational performance has always been fairly obvious and very negative from my perspective
    • A healthy culture should create space for people to learn and grow their skills, particularly in technology, where the landscape is constantly changing and there is a substantial risk of skills becoming atrophied if not reinforced and evolved as things change.
    • This isn’t an argument for random training, of course, as there should be applicability for the skills into which an organization invests on behalf of its employees, but it should be an ongoing priority as much as any delivery effort so you maintain your ability to integrate new technology capabilities as and when they become available over time
  • Facilitating Collaboration
    • This and the next dimension are both discussed in the above article on culture, but the overall point is that creating a productive workplace goes beyond the individual employee to encouraging collaboration and seeking the kind of results discussed in my article on The Power of N
    • The secondary benefit from a collaborative environment is the sense of “connectedness” it creates across teams when it’s present, which would certainly help productivity and creativity/solutioning when part of a healthy, positive culture
  • Creating an Environment of Transparency
    • Understanding there are always certain things that require confidentiality or limited distribution (or both), the level of transparency in an environment helps create connection between the individual and the organization as well as helping to foster and engender trust
    • Reinforcing the criticality of communication in creating an inspiring workplace is extremely obvious, but having seen situations where the opposite is in place, it’s worth noting regardless

Measuring Impact

Several ways to think about impact:

  • Improved Productivity
    • Is more output being produced on a per FTE basis over time?
    • Are technologies like Copilot being leveraged effectively where appropriate?
  • Improved Average Utilization
    • Are utilization statistics reflecting healthy levels (i.e., not significantly over or under allocated) on an ongoing basis (assuming plan/actuals are reasonably reflected)?
  • Improved Employee Satisfaction
    • Are employee surveys trending in a positive direction in terms of job satisfaction?
  • Lower Voluntary Attrition
    • Are metrics declining in relation to voluntary attrition?

Perform

What It Is and Why It Matters

Very simply said: all the aspirations to innovate, grow, and develop capabilities don’t mean a lot if your production environment doesn’t support business and customer needs exceptionally well on a day-to-day basis.

As a former account executive and engagement manager in consulting at various organizations, my account strategy always began with one statement: “Deliver with quality”.  If you don’t block and tackle well in your execution, the best vision and set of strategic goals will quickly be set aside until you do.  This is fundamentally about managing infrastructure, availability, performance of critical solutions, and security.  In all cases, it can be easy to operate in a reactive capacity and be very complacent about it, rather than looking for ways to improve, simplify, and drive greater stability, security, and performance over time.

As an example, I experienced a situation where an organization spent tens of millions of dollars annually on production support, planning for things that essentially hadn’t broken yet, but with no explicit plan or spend targeted at addressing the root causes of the issues themselves.  Thankfully, we were able to reverse that situation and plan some proactive efforts that ultimately took millions out of that spend by simply executing a couple of projects.  In that case, the issue was the mindset, assuming that we had to operate in a reactive rather than proactive way, while the effort and dollars being consumed could have been better applied to developing new business capabilities rather than continuing to band-aid issues we’d never addressed.

Another situation that is fairly prevalent today is the role of FinOps in managing cloud costs.  Without governance, the convenience of spinning up cloud assets and services can add considerable complexity, cost, and security exposure, all under the promise of shifting from a CapEx to an OpEx environment.  The reality is that managing it effectively takes maturity and discipline, and that requires focus so it doesn’t become problematic over time.
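
As a small illustration of the kind of governance FinOps implies, the sketch below rolls up a hypothetical cloud billing export by an owner tag and flags untagged resources and teams running over budget.  The record shape, tag names, and budget figures are assumptions for the example, not any provider’s billing format or API.

```python
# Illustrative FinOps-style check over a cloud billing export: roll up spend by
# owning team and flag untagged resources and budget overruns. The record
# fields, tags, and budget figures are hypothetical.
from collections import defaultdict

MONTHLY_BUDGET_USD = {"team_logistics": 20_000, "team_quality": 8_000}

billing_records = [
    {"resource_id": "vm-001", "monthly_cost_usd": 950.0, "tags": {"owner": "team_logistics"}},
    {"resource_id": "db-114", "monthly_cost_usd": 12_400.0, "tags": {"owner": "team_quality"}},
    {"resource_id": "vm-207", "monthly_cost_usd": 310.0, "tags": {}},  # missing owner tag
]

def review(records):
    spend_by_owner = defaultdict(float)
    untagged = []
    for rec in records:
        owner = rec["tags"].get("owner")
        if owner is None:
            untagged.append(rec["resource_id"])
        else:
            spend_by_owner[owner] += rec["monthly_cost_usd"]
    over_budget = {
        owner: spend
        for owner, spend in spend_by_owner.items()
        if spend > MONTHLY_BUDGET_USD.get(owner, 0)
    }
    return untagged, over_budget

untagged, over_budget = review(billing_records)
print("Untagged resources:", untagged)   # candidates for tagging policy enforcement
print("Over budget:", over_budget)       # teams to engage on optimization
```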

There are many ways to think about managing and optimizing production, but the dimensions that come to mind as worthy of some attention are expressed below.

Key Dimensions to Consider

Dimensions that are top of mind in relation to this area:

  • Providing Reliability of Critical Solutions
    • Having worked with a client where the health of critical production solutions had deteriorated to the point that it became the top IT priority, this can’t be overlooked in any strategy
    • It’s great to advance capabilities through ongoing delivery work, but if you can’t operate and support critical business needs on a daily level, it doesn’t matter
  • Effectively Managing Vulnerabilities
    • With the increase in complexity in managing technology environments today, internal and external to an organization, cyber exposure is growing at a rate faster than anyone can manage it fully
    • To that end, having a comprehensive security strategy, from managing external to internal threats, ransomware, etc. (from the “outside-in”) is critical to ensuring ongoing operations with minimal risk
  • Evolving Towards a “Zero Trust” Environment
    • Similar to the previous point, while the definition of “zero trust” continues to evolve, managing a conceptual “least privilege” environment (from the “inside-out”) that protects critical assets, applications, and data is an imperative in today’s complex operating environment
  • Improving Integrated Solution Performance
    • Again, with the increasing complexity and distribution of solutions in a connected enterprise (including third party suppliers, partners, and customers), the end user experience of these solutions is an important consideration that will only increase in importance
    • While there are various solutions for application performance monitoring (APM) on the market today, the need for integrated monitoring, analytics, and optimization tools will likely increase over time to help govern and manage critical solutions where performance characteristics matter
  • Developing a Culture Surrounding Security
    • Finally, in relation to maintaining an effective (physical and cyber) security posture, while deliberate strategies for vulnerability management and zero trust are the methods by which risk is managed and mitigated, there is equally a mindset that needs to be established and integrated into an organization for risk to be managed effectively
    • This dimension is meant to recognize the need to provide adequate training, review key delivery processes (along with associated roles and responsibilities), and evaluate tools and safeguards to create an environment conducive to managing security overall

Measuring Impact

Several ways to think about impact:

  • Increased Availability
    • Is the reliability of critical production solutions improving over time and within SLAs?
  • Lower Cybersecurity Exposure
    • Is a thoughtful plan for managing cybersecurity in place, and is it being executed, monitored, and managed on a continuous basis?
    • Do disaster recovery and business continuity plans exist and are they being tested?
  • Improved Systems Performance
    • Are end user SLAs met for critical solutions on an ongoing basis?
  • Lower Unplanned Outages
    • Are unplanned outages or events declining over time?
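
To make the availability and outage measures above concrete, a minimal sketch (the incident durations and SLA target are hypothetical illustrations) might look like the following:

```python
# Minimal sketch: availability and unplanned-outage trending for a critical solution.
# Outage minutes and the SLA target are hypothetical.

MINUTES_PER_MONTH = 30 * 24 * 60

def availability(unplanned_outage_minutes: float) -> float:
    """Availability as a fraction of scheduled uptime for the month."""
    return 1 - (unplanned_outage_minutes / MINUTES_PER_MONTH)

sla_target = 0.999  # e.g., "three nines" for a critical solution

# unplanned outage minutes by month for one solution (hypothetical)
outages = {"Jul": 95, "Aug": 40, "Sep": 12}

for month, minutes in outages.items():
    avail = availability(minutes)
    status = "meets SLA" if avail >= sla_target else "misses SLA"
    print(f"{month}: {avail:.4%} availability ({minutes} min unplanned) - {status}")
```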

Wrapping Up

Overall, the goal of this article was to share some concepts on where I see the value of strategy for IT in enabling a business.  I didn’t delve into what the makeup of the underlying technology landscape is or should be (things I discuss in articles like The Intelligent Enterprise and Perspective on Impact-Driven Analytics), because the point is to think about how to create momentum at an overall level in areas that matter… innovation, speed, value/cost, productivity, and performance/reliability.

Feedback is certainly welcome… I hope this was worth the time to read it.

-CJG 12/05/2023

The Criticality of Culture

Having spent time on an extended road trip over the last couple of months, I had the opportunity to reflect on a number of things, both personal and professional.  In the professional sense: the journey I’ve been on across nearly thirty-two years and seven employers, what has worked, where I’ve had challenges, what I’ve learned, and, looking forward, what I’d like to learn and be part of in my next opportunity on the road ahead.  A critical element in that experience certainly relates to culture and the ability to both be valued and make a difference as part of an organization.

To that end, while the backlog of topics for this blog is quite expansive already, I thought that it would be worth sharing some thoughts on culture as a critical component in setting the stage for excellence in an organization.

The remainder of this article will focus on culture at an overall level as well as a set of core values that I believe are a good starting point for what a healthy workplace environment should include… As with all things on my blog, this is my point of view and I’m definitely interested in other ways of thinking about this, values that are important that I may have overlooked, or questions on what is presented… the insight gained through the dialogue, especially where culture is concerned, can be extremely valuable.

 

Looking Beyond the Language

Culture in Action

Action, Not Words. 

This simple phrase pretty well sums up how I feel about culture at an overall level.  Culture is not about what you write down or say publicly; culture is about how you behave and what you value when it matters or when no one is looking.  The latter point is akin to the question of whether you would run a red light in the middle of nowhere in the middle of the night.  Some people definitely would… and it’s the same way with culture.

Having been employed by seven organizations and worked with many clients who have had cultures of their own, I’ve seen many variations of this over the years.  From places where the culture is relegated to a tagline printed across a set of internal collateral, to sets of values or principles that are a combination of words or phrases and a contextual explanation of what they are meant to represent in practice.  I’ve seen them referenced rarely and frequently, depending on the organization.  I’ve also seen where what is said publicly is completely different than what happens privately through words, actions, or both.  And, in one case, I’ve seen where there was a wonderful alignment of what was said to what was put into practice and reinforced across the organization…

The last example was in my early days at Sapient and its core values in the late 90s.  While I feel strongly about not mentioning any specific companies in the course of my writing, this is a case where there was such a concerted effort to live into the culture that it seems appropriate to acknowledge the organizational accomplishment.  Arguably, as the company grew and went through a period of acquisitions, evolving and adapting that culture became a challenge, but it was something that went far beyond words on a page to something we, as employees, aspired to, and that’s definitely a good thing.  The fact that I remember the original and (eventually) modified core values over twenty years since I left the organization says something about the level to which we internalized them at the time.  I also remember how the phrase “In the spirit of Openness…” (one of our core values) as the opening to a sentence meant that you were about to get some direct, unfiltered feedback about something you needed to do differently, because it didn’t align to the culture or expectations of the organization as a whole.  It was brutal feedback at times, but it applied equally to everyone, from the co-CEOs to the developers, and something about that made it feel more acceptable and genuine, however it may have been delivered in the moment.  I also remember a client remarking to us in a sales meeting that, when they asked us about our culture, everyone in the team nearly jumped out of their chair or had a story to tell.  It was something we fundamentally believed in, and that energy and excitement was palpable.  It also translated into the experience we created for our clients (as part of “Client-Focused Delivery”) as well as a non-existent attrition rate we used to talk about in Chicago because we didn’t lose a single employee for the first eighteen months I was in the office which, in consulting, is nearly unheard of.

So, with that as the benchmark on the positive side, suffice it to say that I’ve seen other organizations show up differently, and with varying levels of impact on morale, performance, attrition, and other things that matter from a business perspective.  The point is consistency in words and actions, because values become the pillars upon which an organization establishes the foundation for operating performance.  They are the rules of the road and, when they aren’t followed consistently, employee experience and ultimately business performance will suffer.

 

Culture in Applicability

All animals are equal, but some animals are more equal than others. (Animal Farm, George Orwell)

In concert with culture being action-oriented within an organization, the expectation that it apply equally is also critical to giving it credibility at a broader level, which is why the Orwell quote came to mind in this regard.

To the degree that there is a different set of rules that applies to “leadership” than to the remainder of an organization, it can cause a ripple effect whereby people develop an “us and them” mentality and, by extension, a negative perception and lack of trust in senior leaders, regardless of the messaging they hear in public forums.  That perception can extend to strong performers not wanting to contribute at a standout level for fear of becoming part of that environment, which can clearly hinder overall organizational results over time.  I do believe the standards for behavior and expectations of senior leaders should be higher as a consequence of their increased responsibilities to an organization, but with regard to the subject of this article, that would translate into being more true to the culture as its principal advocates.

 

The Foundation for a Healthy Environment

So, if given a blank sheet of paper, below are the core values I would start with (as part of a leadership discussion) in the interest of trying to establish a healthy and productive workplace.

 

Integrity

Culture has to start with an intention to do the right thing, promote honesty, and discourage passive-aggressive behaviors.  This applies to how business matters are handled internally and externally, with high ethical standards that are fair but don’t waver.  This is probably the most challenging core value to establish consistently in an organization in my experience, which is why I put it first on the list.

 

Respect

This core value is about treating everyone fairly and consistently, regardless of their background and experience, ensuring their voice is heard, and that inclusive diversity is part of the workplace, including its representation in leadership.

 

Transparency

Transparency is critical in establishing an environment of trust, free of unspoken agendas, promoting an environment where employees can seek understanding, ask questions, and engage in dialogue surrounding critical decisions and actions in the interest of advancing the organization overall.  People with nothing to hide, hide nothing… and, notwithstanding situations that require confidentiality for business reasons, my experience of people who are not open with their intentions and actions has generally not been very positive.

 

Collaboration

An environment that promotes respect should also recognize that there is power in collaboration that extends the capabilities of an organization far more than a group of “individual contributors” working in silos ever could (something I discuss in The Power of N).  This also implies a degree of humility within and across an organization, as the idea that an individual or team is “better than” others in some form or fashion can create an environment that excludes people or ideas in a way that ultimately hinders growth and evolution.

 

Leadership

This core value can be somewhat of a catchall given its implications are fairly broad.  I will fall back on the Sapient definition of “getting a group of people from where they are to where they need to be” as a characterization, but the overall point is to accept, drive, and encourage innovation, change, and evolution as critical to business success.

 

Impact

Finally, I believe it’s important to focus on value creation, in whatever way that manifests itself.  Results ultimately matter and should be part of how a culture and set of behaviors across an organization are established and evaluated on an ongoing basis.

 

Wrapping Up

I know there are many dimensions to establishing a healthy and thriving culture beyond what I’ve covered here, but minimally I wanted to share some concepts in the interest of stirring discussion on something that is critical to establishing operating agility and performance.

Above anything else, one thing is definitely true: culture can change in a negative direction quickly, particularly with poor and/or inexperienced leadership, but making a culture healthy and thriving to the extent that it becomes a sustainable part of an organization takes a lot of time and reinforcement because of the level of behavioral change involved.

Some questions to consider:

  • Are the core values in an organization well established and defined in a manner such that they can be applied to people’s work on a daily basis?
  • Do senior leaders live by those values at a level equal or greater to that which they expect from the average employee?
  • Is the culture strong enough, and well enough understood, that the average employee would advocate it to a third party as a reason to join or work with the organization?
  • To what extent is “culture” articulated as a reason people stay with or exit an organization?

 

I hope the ideas were worth considering.  Thanks for taking the time to read them.  Feedback is welcome as always.

-CJG 10/13/2023

Perspective on Impact-Driven Analytics

Overview

I’ve spent a reasonable amount of time in recent years considering data strategy and how to architect an enterprise environment responsive and resilient to change.  What has complicated matters is the many dimensions to establishing a comprehensive data strategy and the pace with which technologies and solutions have been and continue to be introduced, none of which appears to be slowing down… quite the opposite.  At the same time, the focus on “data centricity” and organizations’ desire to make the most of the insights embedded within and across their enterprise systems has created a substantial pull to drive experimentation and create new solutions aimed at monetizing those insights for competitive advantage.  With the recent advent of Generative AI and large language models, the fervor surrounding analytics has only grown, with more attention on the potential it may create, not all of it to a favorable end.

The problem with the situation is that, not unlike many other technology “gold rush” situations that have occurred over the last thirty-one years I’ve been working in the industry, the lack of structure and discipline (or an overall framework) to guide execution can lead to a different form of technical debt, suboptimized outcomes, and complexity that ultimately doesn’t scale to the enterprise.  Hopefully this article will unpack the analytics environment and provide a way to think about the various capabilities that can be brought to bear in a more structured approach, along with the value in doing so.

Ultimately, analytics value is created through insight-driven, orchestrated actions, not through the presentment or publication of data itself.

 

Drawing a “Real World” Comparison

The anecdotal hallmark of “traditional business intelligence” is the dashboard, which, in many cases, reflects a visual representation of data contained in one or more underlying systems, meant to increase end user awareness of the state of affairs, whatever the particular business need may be (this is a topic I’ve peripherally addressed in my On Project Health and Transparency article).

Having leased a new car last summer, I was both impressed and overwhelmed by the level of sophistication available to me through the various displays in the vehicle.  The capabilities have come a long way from a bunch of dials on the dashboard with a couple lights to indicate warnings.  That being said, there was a simplicity and accessibility to that design.  You knew the operating condition of the vehicle (speed, fuel, engine temp, etc.), were warned about conditions you could address (add oil, washer fluid), and situations where expert assistance might be needed (the proverbial “check engine” light).

What impressed me about the current experience design was the level of configurability involved, what I want to see on each of the displays, from advanced operating information, to warnings (exceeding the speed limit, not that this ever happens…), to suggestions related to optimizing engine performance and fuel efficiency based on analytics run over the course of a road trip.

This isn’t very different from the analytics environment available to the average enterprise: the choices are seemingly endless, and they can be quite overwhelming if not managed in some way.  The question of modeling the right experience comes down to this: start with the questions/desired outcome, then work backwards in terms of the capabilities and data that need to be brought to bear to address those needs.  Historically, analytics can drift into a “data- or source-forward” mental model, when the ideal environment should be defined from the “outcome-backwards”, where the ultimate solution is rooted in a problem (or use case) meant to be solved.

 

Where Things Break Down

As I stated in the opening, the analytics landscape has gotten extremely complex in recent years and seemingly at an increasing pace.  What this can do, as is somewhat the case with large language models and Generative AI right now, is create a lot of excitement over the latest technology or solution without a sense of how something can be used or scaled within and across an enterprise.  I liken this to a rush to the “cool” versus the “useful”, and it becomes a challenge the minute it distracts from the underlying realities of the analytics environment.

Those realities are:

  • Business ownership and data stewardship are critical to identifying the right opportunities and unlocking the value to be derived from analytics. Technology is normally NOT the underlying issue in having an effective data strategy, though disciplined delivery can obviously be a challenge depending on the capabilities of the organization
  • Not all data is created equal, and it’s important to discriminate in what data is accessed, moved, stored, curated, governed… because there is a business and technology cost for doing so
  • Technologies and enabling capabilities WILL change, so the way they are integrated and orchestrated is critically important to leveraging them effectively over time
  • It is easy to develop solutions that solve a specific need or use case but not to scale and integrate them as enterprise-level solutions. In an Intelligent Enterprise, this is where orders of magnitude in value and longer-term competitive advantage are created, across digitally connected ecosystems (including those with partners), starting with effective master data management and extending to newer capabilities that will be discussed below

At an overall level, while it’s relatively easy to create high-level conceptual diagrams or point solutions in relation to data and analytics, it takes discipline and thought to architect an environment that will produce value and agility at scale… that is part of what this article is intended to address.

 

Thoughts on “Data Centricity”

Given there is value to be unlocked through an effective data strategy, “data centricity” has become fairly common language as an anchor point in discussion.  While I feel that calling attention to opportunity areas can be healthy and productive, there is also a risk that concepts without substance (the antithesis of what I refer to as “actionable strategies”) can become more of a distraction than a facilitator of progress and evolution.  A similar situation arguably exists with “zero trust” right now, but that’s a topic worthy of its own article at a future date.

In the case of being “data centric”, the number of ways the language can be translated has seemed problematic to me, largely because I fundamentally believe data is only valuable to the extent it drives a meaningful action or business outcome. To that end, I would much rather be “insight-centric”, “value-focused”, “action-oriented”, or some other phrase that leans towards what we are doing with the data we acquire and analyze, not the fact that we have it, can access, store, or display it.  Those things may be part of the underlying means to an end, but they aren’t the goal in themselves, and they place emphasis on the road rather than the destination of the journey.

To the extent that “data centricity” drives a conversation on what data a business has that may, if accessed and understood, create value, fuel innovation, and provide competitive advantage, I believe there is value in pursuing it, but a robust and thoughtful data strategy requires end-to-end thinking at a deeper level than a catch phrase or tagline on its own.

 

What “Good” Looks Like

I would submit that there are two fundamental aspects of having a robust data strategy once you address business ownership and stewardship as a foundational requirement: asking the right questions, and architecting a resilient environment.

 

Asking the Right Questions

Arguably the heading is a bit misleading here, because inference-based models can suggest improvements to move from an existing to a desired state, but the point is to begin with the problem statement, opportunity, or desired outcome, and work back to the data, insights, and actions required to achieve that result.  This is a business-focused activity, which is why establishing ownership and stewardship is so critical.

“We can accomplish X if we optimize inventory across Y locations while maintaining a fulfillment window of Z”

The statement above is different from something more “traditional” in the sense of producing a dashboard that shows “inventory levels across locations”, “fulfillment times by location”, etc., which is then intended to inform someone who may ultimately make a decision independent of secondary impacts.  Better yet, the outcome-focused framing recommends or enables actions that keep the inventory ecosystem calibrated in a dynamic way, continuously recalibrating to changing conditions within defined business constraints.

While the example itself may not be perfect, the point is whether we think about analytics as presentment-focused or outcome-focused.  To the degree we focus on enabling outcomes, the requirements of the environment we establish will likely be different, more dynamic, and more biased towards execution.
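
To illustrate the difference, a minimal sketch of the outcome-focused framing treats the question as a small optimization problem rather than a display of data.  The locations, costs, capacities, and fulfillment constraint below are purely illustrative assumptions:

```python
# Minimal sketch: framing "optimize inventory across Y locations while maintaining
# a fulfillment window of Z" as a small linear program. All numbers are hypothetical.

from scipy.optimize import linprog

holding_cost = [2.0, 1.5, 1.2]     # $/unit/month at locations A, B, C
capacity     = [500, 800, 1000]    # max units each location can hold
ship_days    = [1, 2, 4]           # days to reach customers from each location
demand       = 1500                # total units to cover
fast_share   = 0.8                 # >= 80% of demand must ship within 2 days

# Objective: minimize total holding cost.
c = holding_cost

# Inequality constraints in the form A_ub @ x <= b_ub.
A_ub = [
    [-1, -1, -1],                    # cover total demand
    [-(d <= 2) for d in ship_days],  # "fast" locations cover the fulfillment-window share
]
b_ub = [-demand, -fast_share * demand]

bounds = [(0, cap) for cap in capacity]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("Units by location:", [round(x) for x in result.x])
print("Monthly holding cost:", round(result.fun, 2))
```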

 

Architecting a Resilient Environment

With the goals identified, the technology challenge becomes enabling those outcomes while architecting an environment that can and will evolve as those needs change and as the underlying capabilities continue to advance what we are able to do in analytics as a whole.

What that means, as the next section will explore, is having a structured and layered approach so that capabilities can be applied, removed, and evolved with minimal disruption to other aspects of the overall environment.  This is, at its essence, a modular and composable architecture that enables interoperability through standards-based interaction across the layers of the solution in a way that will accelerate delivery and innovation over time.

The benefit of designing an interoperable environment is simple: speed, cost, and value.  As I mentioned in discussing where things tend to break down, in technology there should always be a bias towards rapid delivery.  That being said, focusing solely on speed tends to create a substantial amount of technical debt and monolithic solutions that don’t create cumulative or enterprise-level value.  Short-term, they may produce impact, but medium- to longer-term, they make things considerably worse once they have to be maintained and supported and the costs of doing so escalate.  Where a well-designed environment can help is in creating a flywheel effect over time, accelerating delivery using common infrastructure, integration standards, and frameworks so that the distance between idea and implementation is significantly reduced.
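
As a minimal sketch of what “modular and composable” can mean in practice, each layer exposes a narrow, standards-based contract so the implementation behind it can change with minimal disruption.  The layer names mirror the tiers described in the next section; the interfaces themselves are purely illustrative assumptions, not any particular product’s API:

```python
# Minimal sketch: narrow contracts between the layers of an analytics environment,
# so implementations can be swapped without disturbing their neighbors.

from typing import Any, Iterable, Protocol

class DataSource(Protocol):          # Generate and Provide
    def stream(self) -> Iterable[dict[str, Any]]: ...

class SemanticLayer(Protocol):       # Organize and Expose
    def query(self, question: dict[str, Any]) -> Iterable[dict[str, Any]]: ...

class AnalyticCapability(Protocol):  # Understand and Analyze
    def evaluate(self, records: Iterable[dict[str, Any]]) -> dict[str, Any]: ...

class ExperienceChannel(Protocol):   # Consume and Engage
    def deliver(self, insight: dict[str, Any]) -> None: ...

def run_use_case(source: SemanticLayer,
                 capability: AnalyticCapability,
                 channel: ExperienceChannel,
                 question: dict[str, Any]) -> None:
    """Orchestrate one outcome without binding to any specific product at any tier."""
    records = source.query(question)
    insight = capability.evaluate(records)
    channel.deliver(insight)
```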

 

Breaking Down the Environment

The following diagram represents the logical layers of an analytics environment and some of the solutions or capabilities that can exist at each tier.  While the diagram could arguably be drawn in various ways, the reason I’ve drawn it like this is to show the separation of concerns between where content and data originates and ultimately where it’s consumed, along with the layers of processing that can occur in between.

With the separation of concerns defined and standards (and a reference architecture) established, scaling, integrating new solutions and capabilities over time, and retiring or modernizing those that don’t create the right level of value all become considerably easier than when analytics solutions are purpose-built in an end-to-end manner.

The next section will elaborate on each of the layers to provide more insight on why they are organized in this manner.

 

Consume and Engage

The “outermost” tier of the environment is the consumption layer, where all of the underlying analytics capabilities of an organization should be brought to bear.

In the interest of transforming analytics, as was previously mentioned in the context of “data centricity”, the dialogue needs to move from “What do you want to see?” in business terms to “What do you want to accomplish and how do you want that to work from an end user standpoint?”, then employing capabilities at the lower-level tiers to enable both that outcome and that experience.

The latter dimension is important, because it is possible to deliver both data and insights and not enable effective action, and the goal of a modern analytics environment is to enable outcomes, not a better presentment of a traditional dashboard.  This is why I’ve explicitly called out the role of a Digital Experience Platform (DXP) or minimally an awareness of how end users are meant to consume, engage, and interact with the outcome of analytics, ideally as part of an integrated experience that enables or automates action based on the underlying goals.

As analytics continue to move from passive and static to more dynamic and near real-time solutions, the role of data apps as an integrated part of applications or a digital experience for end users (internal or external) will become critical to delivering on the value of analytics investments.

Again, the requirements at this level are defined by the business goals or outcomes to be accomplished, questions to be answered, user workflows to be enabled, etc. and NOT the technologies to be leveraged in doing so.  Leading with technologies is almost certainly a way to head down a path that will fail over time and create technical debt in the process.

At an overall level, the reason for separating consumption and thinking of it independently of anything that “feeds” it is that, regardless of how good the data or insights produced in the analytics environment are, if the end user can’t take effective action on what’s delivered, there will be little value created by the solution.

 

Understand and Analyze

Once the goal is established, the capabilities to be brought to bear become the next level of inquiry (a small illustrative sketch follows the list):

  • If there is a set of activities associated with this outcome that requires workflow, rules, and process automation, orchestration should be integrated into the solution
  • If defined inputs are meant to be processed against the underlying data and a result dynamically produced, this may be a case where a Generative AI engine could be leveraged
  • If natural language input is desired, a natural language processing engine should be integrated
  • If the goal is to analyze the desired state or outcome against the current environment or operating conditions and infer the appropriate actions to be taken, causal models and inference-based analytics could be integrated. This is where causal models take a step past Generative AI in their potential to create value at an enterprise level, though the “describe-ability” of the underlying operating environment would likely play a key role in the efficacy of these technologies over time
  • Finally, if the goal is simply to run data sets through “traditional” statistical models for predictive analytics purposes (as an example), AI/ML models may be leveraged in the eventual solution
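
A small, purely illustrative sketch of that line of inquiry, mapping stated needs to the capability types above (the flags and capability names are hypothetical placeholders, not specific products):

```python
# Minimal sketch: routing a desired outcome to the analytic capabilities it implies.
# The mapping and the "needs" flags are illustrative only.

def select_capabilities(needs: set[str]) -> list[str]:
    capability_map = {
        "workflow_automation": "orchestration / rules engine",
        "generated_content":   "generative AI engine",
        "natural_language":    "NLP engine",
        "state_to_goal":       "causal / inference-based models",
        "prediction":          "AI/ML statistical models",
    }
    return [capability_map[n] for n in needs if n in capability_map]

# e.g., an outcome that needs conversational input and a recommended action plan
print(select_capabilities({"natural_language", "state_to_goal"}))
```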

Having referenced the various capabilities above, there are three important points to understand about why this layer is critical and separated from the rest:

  • Any or all of these capabilities may be brought to bear, regardless of how they are consumed by an end user, and regardless of how the underlying data is sourced, managed, and exposed.
  • Integrating them in ways that are standards-based will allow them to be applied as and when needed into various solutions to create considerable cumulative analytical capability at an enterprise level
  • These capabilities definitely WILL continue to evolve and advance rapidly, so thinking about them in a plug-and-play approach will create considerable organizational agility to respond to and integrate innovations as and when they emerge over time, which translates into long-term value and competitive advantage.

 

Organize and Expose

There are three main concepts I outlined in this tier of the environment:

  • Virtualization – how data is exposed and accessed from underlying internal and external solutions
  • Semantic Layer – how data is modeled for the purpose of allowing capabilities at higher tiers to analyze, process, and present information at lower levels of the model
  • Data Products – how data is packaged for the purposes of analysis and consumption

These three concepts can be implemented with one or more technologies, but the important distinction is that they offer a representation of underlying data in a logical format that enables analysis and consumption, not necessarily a direct representation of the source data or content itself.
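
As a minimal, hypothetical sketch of that distinction, a data product can be described as a contract over virtualized sources, independent of where or how the data is physically stored (the names, fields, and sources below are illustrative only):

```python
# Minimal sketch: a data product described as a contract over virtualized sources.
# Consumers see a stable, logical representation rather than the physical systems.
# All names, fields, and sources are hypothetical.

from dataclasses import dataclass, field

@dataclass
class DataProduct:
    name: str
    owner: str                      # business owner / steward accountable for the product
    sources: list[str]              # underlying systems exposed through virtualization
    schema: dict[str, str]          # logical (semantic) model presented to consumers
    freshness_sla: str
    quality_checks: list[str] = field(default_factory=list)

inventory_positions = DataProduct(
    name="inventory_positions",
    owner="Supply Chain Operations",
    sources=["erp.stock_levels", "wms.bin_counts", "3pl.feed"],
    schema={"sku": "string", "location": "string", "on_hand": "int", "as_of": "timestamp"},
    freshness_sla="15 minutes",
    quality_checks=["on_hand >= 0", "as_of within freshness_sla"],
)
```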

With regard to data products in particular, while there is a significant amount of attention paid to their identification and development, they represent marginal value in an overall data strategy, especially when analytical capabilities and consumption models have evolved to such a great degree.  Where data products should be a focus (as a foundational step) is where the underlying organization and management of data is in such disarray that an examination of how to restructure and clean up the environment is important to reducing the chaos that exists in the current state.  What that implies, however, is fewer distractions and less potential technical debt by extension, but not the kind of competitive advantage that comes from advanced capabilities and enabled consumption.  The other scenario where data products create value in themselves is when they are packaged and marketed for external consumption (e.g., credit scores, financial market data).  It’s worth noting in this case, however, that the end customer assumes the responsibility of analyzing, integrating, and consuming those products, as they are not an “end” in themselves in an overall analytics value chain.

 

Manage, Structure, and Enrich

While I listed a number of different types of solutions that can comprise a “storage” layer in the analytics environment, the best-case scenario would be that it doesn’t exist at all.  Where the storage layer creates value in analytics is providing a means to map, associate, enrich, and transform data in ways that would be too time consuming or expensive to do “on the fly” for the purposes of feeding the analytics and consumption tiers of the model.  There is certainly value, for instance, in graph databases for modeling complex many-to-many relationships across data sets, marts and warehouses for dealing with structured data, and data lakes for archival, managing unstructured data, and training of analytical models, but where source data can be exposed and streamed directly to the downstream models and solutions, there will be lower complexity, cost, and latency in the overall solution.

 

Acquire and Transmit

As capabilities continue to advance and consumption models mature, the desire for near real-time analytics will almost certainly dominate the analytics environment.  To that end, leveraging event-based processing, whether through an enterprise service or event bus, will be critical.  To the degree that enterprise integration standards can be leveraged (and canonical objects, where defined), further simplification and acceleration of analytics efforts will be possible.
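
A minimal sketch of what event-based acquisition can look like (the canonical event shape and the in-memory “bus” are hypothetical stand-ins; in practice this would be an enterprise event bus or managed streaming service):

```python
# Minimal sketch: publishing canonical events as changes happen, rather than
# batch-extracting data. The event shape and in-memory queue are stand-ins for
# an enterprise service/event bus.

import json
from datetime import datetime, timezone
from queue import Queue

event_bus: Queue = Queue()  # stand-in for a real event bus or streaming service

def publish(event_type: str, payload: dict) -> None:
    envelope = {
        "type": event_type,                               # canonical event type
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,                               # canonical object, not source-specific
    }
    event_bus.put(json.dumps(envelope))

# a source system emits a change as it happens; downstream analytics consume it
publish("inventory.adjusted", {"sku": "A-100", "location": "DC-7", "delta": -12})
print(event_bus.get())
```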

Given the varied capabilities across cloud platforms (AWS, Azure, and GCP), not to mention the probability that data will be distributed across enterprise systems that may be hosted in a different cloud platform than related documents (such as those in Office 365), the ability to think critically about how to integrate and synthesize across platforms is also important.  Without a defined strategy for managing multi-cloud in this domain in particular, costs for egress/ingress of data can be substantial depending on the scale of the analytics environment itself, not to mention the additional complexities that would be introduced into governance and compliance efforts surrounding duplicated content across cloud providers.
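
The egress point is easy to underestimate.  As a hypothetical back-of-the-envelope (the per-GB rate and feed size are illustrative assumptions, not any provider’s actual pricing):

```python
# Hypothetical back-of-the-envelope: cross-cloud egress cost for a recurring
# analytics feed. The rate and volume are illustrative only.

daily_feed_gb = 500          # data copied from one cloud platform to another each day
egress_rate_per_gb = 0.08    # assumed $/GB for illustration; actual provider pricing varies

annual_cost = daily_feed_gb * egress_rate_per_gb * 365
print(f"~${annual_cost:,.0f} per year before storage, compute, or duplicate-governance costs")
```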

 

Generate and Provide

The lowest tier of the model is the simplest to describe, given it’s where data and content originate, which can be a combination of applications, databases, digital devices and equipment, and so forth, internal and external to an organization.  Back to the original point on business ownership and stewardship of data, if the quality of data emanating from these sources isn’t managed and governed, everything downstream will bear the fruit of the poisoned tree depending on the degree of issues involved.

Given the amount of attention paid to large language models and GenAI right now, I thought it was worth noting that I consider these another form of content generation, more logically associated with the other types of solutions at this tier of the analytics model.  It could be the case that generated content makes its way through all the layers as a “data set” delivered directly to a consumer in the model, but by orienting and associating it with the rest of the sources of data, we create the potential to apply other capabilities at the next tiers of processing to that generated content, and thereby could enrich, analyze, and do more interesting things with it over time.

 

Wrapping Up

As I indicated at the opening, the modern analytics environment is complex and highly adaptive, which presents a significant challenge to capturing the value and competitive advantage that is believed to be resident in an organization’s data.

That being said, through establishing the right level of business ownership, understanding the desired outcomes, and applying disciplined thinking in how an enterprise environment is designed and constructed, there can be significant and sustainable value created for an enterprise.

I hope the ideas were thought provoking.  I appreciate those taking the time to read them.

 

-CJG 07/27/2023

Optimizing the Value of IT

Overview

Given the challenging economic environment, I thought it would be a good time to revisit something that was an active part of my work for several years, namely IT cost optimization.

In the spirit of Excellence by Design, I don’t consider cost optimization to be a moment-in-time activity that becomes a priority on a periodic (“once every X years”) or reactive basis.  Optimizing the value/cost ratio is something that should always be a priority in the interest of having disciplined operations and maintaining organizational agility, technical relevance, and competitive advantage.

In the consulting business, this is somewhat of a given, as most clients want more value for the money they spend on an annualized basis, especially if the service is something provided over a period of time.  Complacency is the fastest path to lose a client and, consequently, there is a direct incentive to look for ways to get better at what you do or provide equivalent service at a lower cost to the degree the capability itself is already relatively optimized.

On the corporate side, however, where the longer-term ramifications of technology decisions bear out in accumulated technical debt and complexity, the choices become more complex as they are less about a project, program, or portfolio and become more focused on the technology footprint, operating model, and organizational structure as a whole.

To that end, I’ll explore various dimensions of how to think about the complexity and makeup of IT from a cost perspective along with the various levers to explore in how to optimize value/cost.  I’m being deliberate in mentioning both because it is very easy to reduce costs and have an adverse impact on service quality or agility, and that’s why thoughtful analysis is important in making informed choices on improving cost-efficiency.

Framing the Problem

Before looking at the individual dimensions, I first wanted to cover the simple mental model I’ve used for many years in terms of driving operating performance:

 

The model above is based on three connected components that feed each other in a continuous cycle:

  • Transparency
    • We can’t govern what we can’t see. The first step in driving any level of thoughtful optimization is having a fact-based understanding of what is going on
    • This isn’t about seeing or monitoring “everything”. It is about understanding the critical, minimum information that is needed to make informed decisions and then obtaining as accurate a set of data surrounding those points as possible.
  • Governance
    • With the above foundation in place, the next step is to have leadership engagement to review and understand the situation, and identify opportunities to improve.
    • This governance is a critical step in any optimization effort because, if there are not sustainable organizational or cultural changes made in the course of transforming, the likelihood of things returning to a similar condition will be relatively high.
  • Improvement
    • Once opportunities are identified, executing effectively on the various strategies becomes the focus, with the goal of achieving the outcomes defined through the governance process
    • The outcomes of this work should then be reflected in the next cycle of operating metrics and the cycle can be repeated on a continuing basis.

The process for optimizing IT costs is no different than what is expressed here: understand the situation first, then target areas of improvement, make adjustments, continue.  It’s a process, not a destination.  From here, we’ll explore the various dimensions of complexity and cost within IT, and the levers to consider in adjusting them.

 

At an Operating-Level

Before delving into the footprint itself, a couple areas to consider at an overall level are portfolio management and release strategy.

 

Portfolio management

Given that I am mid-way through writing an article on portfolio management and am also planning a separate one on workforce and sourcing strategy, I won’t explore this topic much beyond saying that having a mature portfolio management process can help influence cost-efficiency.

That being said, I don’t consider ineffective portfolio management to be a root cause of IT value/cost being imbalanced.  An effective workforce and sourcing strategy that aligns variable capacity to sources of demand fluctuation (within reasonable cost constraints) should enable IT to deliver significant value even during periods of increased business demand.  However, a lack of effective prioritization, disciplined estimation and planning, resource planning, and sourcing strategy in combination with each other can have significant and harmful effects on cost-efficiency and, therefore, generally provides opportunities for improvement.

Some questions to consider in this area:

  • Is prioritization effective in your organization? When “priority” efforts arise, are other ongoing efforts stopped or delayed to account for them, or is the general trend to take on more work without recalibrating existing commitments?
  • Are estimation and planning efforts benchmarked, reviewed, analyzed and improved, so the integrity of ongoing prioritization and slotting of projects can be done effectively?
  • Is there a defined workforce and sourcing strategy to align variable capacity to fluctuating demand so that internal capacity can be reallocated effectively and sourcing scaled in a way that doesn’t disproportionately have an adverse impact on cost? Conversely, can demand decline without significant need for recalibration of internal, fixed capacity?  There is a situation I experienced where we and another part of the organization took the same level of financial adjustment, but they had to make 3x the level of staffing adjustment given we were operating under a defined sourcing strategy and the other organization wasn’t.  This is an important reason to have a workforce and sourcing strategy.
  • Is resource planning handled on an FTE (e.g., role-based) or resource-basis (e.g., named resource), or some combination thereof? What is the average utilization of “critical” resources across the organization on an ongoing basis?

Release strategy

This is an area that often seems overlooked in my experience (outside product delivery environments) as a means to improve delivery effectiveness, manage cost, and improve overall quality.

Having a structured release strategy that accounts for major and minor releases, with defined criteria and established deployment windows, versus an arbitrary or ad-hoc approach, can be a significant benefit from both an IT delivery and a business continuity perspective.  Generally speaking, release cycles (in a non-CI/CD, DevSecOps-oriented environment) tend to consume time and energy that slow delivery progress.  The more windows that exist, the more disruption occurs over a calendar year.  When those windows are allowed to occur on an ad-hoc basis, the complexities of integration testing, configuration management, and coordination from a project, program, and change management perspective tend to increase in proportion to the number of release windows involved.  Similarly, the risk of quality issues occurring within and across a connected ecosystem increases as the process for stabilizing and testing individual solutions, integrating across solutions, and managing post-deployment production issues is spread across multiple teams in overlapping efforts.  Where standard integration patterns and reference architecture are in place to govern interactions across connected components, there are means to manage and mitigate risk, but generally speaking, it’s better and more cost-effective to manage a smaller set of larger, scheduled release windows than to allow a more random or ad-hoc environment to exist at scale.
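
As a minimal, hypothetical sketch, even a simple structured release calendar with entry criteria gives something to govern against (the dates, criteria, and slotting rule below are illustrative, not a prescribed process):

```python
# Minimal sketch: a structured release calendar with entry criteria, rather than
# ad-hoc deployment windows. Dates, criteria, and the slotting rule are illustrative.

from datetime import date

release_calendar = [
    {"window": date(2024, 2, 17), "type": "major", "code_freeze": date(2024, 2, 3)},
    {"window": date(2024, 3, 16), "type": "minor", "code_freeze": date(2024, 3, 9)},
    {"window": date(2024, 5, 18), "type": "major", "code_freeze": date(2024, 5, 4)},
]

entry_criteria = {
    "major": ["integration test sign-off", "performance baseline", "rollback plan", "CAB approval"],
    "minor": ["regression suite passed", "rollback plan"],
}

def next_window(requested: date, release_type: str) -> dict:
    """Slot a change into the next scheduled window of the right type instead of opening a new one."""
    candidates = [r for r in release_calendar
                  if r["type"] == release_type and r["code_freeze"] >= requested]
    return min(candidates, key=lambda r: r["window"])  # raises if no window remains this cycle

print(next_window(date(2024, 2, 20), "minor"))
```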

 

Applications

In the application footprint, larger organizations or those built through acquisition tend to have a fairly diverse and potentially redundant application landscape, which can lead to significant cost and complexity, both in maintaining and integrating the various systems in place.  This is also true when there is a combination of significant internally (custom) developed solutions working in concert with external SaaS solutions or software packages.

Three main levers can have a significant influence along the lines of what I discuss in The Intelligent Enterprise:

  • Ecosystem Design
    • Whether one chooses to refer to this as business architecture, domain-driven design, component architecture, or something else, the goal is to identify and govern a set of well-defined connected ecosystems that are composable, made up of modular components that provide a clear business (or technical) capability or set of services
    • This is a critical enabler both to optimizing the application footprint and to promoting interoperability and innovation over time, as new capabilities can be more rapidly integrated into a standards-based environment
    • Where complexity comes about is where custom or SaaS/package solutions are integrated in a way that blurs these component boundaries and creates functional overlaps, which lead to technical debt, redundancy, data integrity issues, etc.

 

  • Integration strategy
    • With a set of well-defined components, the secondary goal is to leverage standard integration patterns with canonical objects to promote interoperability, simplification, and ongoing evolution of the technology footprint over time (a minimal sketch of a canonical object follows this list).
    • Without standards for integration, an organization’s ability to adopt new, innovative technologies will be significantly hindered over time and the leverage of those investments marginalized, because of the complexity involved in bringing those capabilities into the existing environment rapidly without having to refactor or rewrite a portion of what exists to leverage them.
    • At an overall level, it is hard to dispute that technologies are advancing at a rate faster than any organization’s ability to adopt and integrate them, so having a well-defined and heavily leveraged enterprise integration strategy is critical to long-term value creation and competitive advantage.

 

  • Application Rationalization
    • Finally, with defined ecosystems and standards for integration, having the courage and organizational leadership to consolidate like solutions to a smaller set of standard solutions for various connected components can be a significant way to both reduce cost and increase speed-to-value over time.
    • I deliberately focused on the organizational aspects of rationalization, because one of the most significant obstacles in technology simplification is the courageous leadership needed to “pick a direction” and handle the objections that invariably result in those tradeoff decisions being made.
    • Technology proliferation can be caused by a number of things, but organizational behaviors can certainly contribute, as when two largely comparable solutions exist and neither is retired, solely because of resistance to change or perceived control or ownership associated with a given solution.
    • At a capability-level, evaluating similar solutions, understanding functional differences and associating the value with those dimensions is a good starting point for simplifying what is in place. That being said, the largest challenge in application rationalization doesn’t tend to be identifying the best solution, it’s having the courage to make the decision, commit the investment, and execute on the plan given “new projects” tend to get more organizational focus and priority in many companies than cleaning up what they already have in place.  In a budget-constrained environment, the new, shiny thing tends to win in a prioritization process, which is something I’ll write about in a future article.
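
The canonical-object idea referenced under the integration strategy lever can be sketched minimally as follows (the object, field names, and source payloads are hypothetical): each system maps its own representation into one shared shape, so adding or retiring an application requires one new mapping rather than point-to-point rework.

```python
# Minimal sketch: a canonical object that systems map into, so integrations are
# N mappings to one shape rather than N x N point-to-point translations.
# The canonical fields and source payloads are hypothetical.

from dataclasses import dataclass

@dataclass
class CanonicalCustomer:
    customer_id: str
    name: str
    email: str
    status: str  # "active" | "inactive"

def from_crm(record: dict) -> CanonicalCustomer:
    return CanonicalCustomer(record["Id"], record["FullName"], record["EmailAddress"],
                             "active" if record["IsActive"] else "inactive")

def from_billing(record: dict) -> CanonicalCustomer:
    return CanonicalCustomer(record["acct_no"], record["acct_name"], record["contact_email"],
                             record["acct_status"].lower())

# Any consumer (analytics, a new SaaS solution, etc.) only ever sees CanonicalCustomer.
print(from_crm({"Id": "C-1", "FullName": "Acme Co", "EmailAddress": "ap@acme.test", "IsActive": True}))
```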

Overall, the larger the organization, the more opportunity may exist in the application domain, and the good news is that there are many things that can be done to simplify, standardize, rationalize, and ultimately optimize what’s in place in ways that both reduce cost and increase the agility, speed, and value that IT can deliver.

 

Data

The data landscape and its associated technologies, especially when considering advanced analytics, have added significant complexity (and likely associated cost) in the last five to ten years in particular.  With the growing demand for AI/ML, NLP, and now Generative AI-enabled solutions, the ability to integrate, manage, and expose data, from producer to ultimate consumer, has become critically important.

Some concepts that are directionally important in my opinion in relation to optimizing value/cost in data and analytics enablement:

  • Managing separation of concerns
    • Similar to the application environment, thinking of the data and analytics environment (OLTP included) as a set of connected components with defined responsibilities, connected through standard integration patterns is important to reducing complexity, enabling innovation, and accelerating speed-to-value over time
    • Significant technical debt can be created where the relationships among operational data stores (ODS), analytics technologies, purpose-built solutions (e.g., graph or time series databases), master data management tools, data lakes, lake houses, virtualization tools, visualization tools, data quality tools, and so on are not integrated in clear, purposeful ways.
    • Where I see value in “data centricity” is in the way it serves as a reminder to understand the value that can be created for organizations in leveraging the knowledge embedded within their workforce and solutions
    • I also, however, believe that value will be unlocked over time through intelligent applications that leverage knowledge and insights to accelerate business decisions, drive purposeful collaboration, and enable innovation and competitive advantage. Data isn’t the outcome, it’s an enabler of those outcomes when managed effectively.

 

  • Minimizing data movement
    • The larger the landscape and the more solutions involved in moving source data from the original producer (whether it’s a connected application, device, or piece of equipment) to the end consumer (however that consumption is enabled), the greater the impact on innovation and business agility.
    • As such, concepts like data mesh / data fabric, which enable distributed sourcing of data in near-real time with minimized data movement to feed analytical solutions and/or deliver end user insights, are critical in thinking through a longer-term data strategy.
    • In a perfect world, where data enrichment is not a critical requirement, the ability to virtualize, integrate, and expose data across various sources to conceptually “flatten” the layers of the analytics environment is an area where end consumer value can be increased while reducing cost typically associated with ETL, storage, and compute spread across various components of the data ecosystem
    • Concepts like zero ETL, data sharing, and virtualization are also key enablers that have promise in this regard (a minimal sketch follows this list)

 

  • Limiting enabling technologies
    • As in the application domain, the more diverse and complex a data ecosystem is, the more likely it is that a set of technologies with overlapping or redundant capabilities is in place.
    • At a minimum, a thoughtful process for reviewing and governing any new technology introductions, to evaluate how they complement, replace, or are potentially redundant or duplicative with solutions already in use, is an important capability to have in place
    • Similarly, it is not uncommon to introduce new technologies with somewhat of a “silver bullet” mindset, without considering the implications for supporting or operating those solutions, which can increase cost and complexity, or having a deliberate plan to replace or retire other solutions that provide a similar capability in the process.
    • Simply said, technical debt accumulates over time, through a set of individually rationalized and justified, but overall suboptimized short-term decisions.
  • Rationalize, simplify, standardize
    • Finally, where defined components exist, data sourcing and movement is managed, and technology introductions are governed, there should be an ongoing effort to modernize, simplify, and standardize what is already in place.
    • Data solutions can tend to be very “purpose-built” in their orientation, to the degree that they enable a specific use case or outcome. The problem occurs when the desired business architecture becomes the de facto technical architecture and significant complexity is created in the process.
    • Using a parallel, smaller scale analogy, there is a reason that logical and physical data modeling are separate activities in application development (the former in traditional “business design” versus the latter being part of “technical design” in waterfall-based approaches). What makes sense from a business or logical standpoint likely won’t be optimized if architected as defined in that context (e.g., most business users don’t think intuitively in third normal form, nor should they have to).
    • Modern technologies allow for relatively cheap storage, but giving thought to how the underlying physical landscape should be designed from producer to consumer is critical both to enabling insight delivery at speed and to doing so within a managed, optimized technology environment.
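
As a minimal sketch of the “minimize data movement” idea referenced above (the sources, connector functions, and query are hypothetical stand-ins for a virtualization or zero-ETL capability), the question is federated across sources at request time rather than copied into yet another store:

```python
# Minimal sketch: federating a question across sources at query time instead of
# replicating data into another store. Sources and fields are hypothetical.

def query_erp(region: str) -> list[dict]:
    # stand-in for a pushed-down query against the ERP's own store
    return [{"sku": "A-100", "region": region, "on_hand": 340}]

def query_ecommerce(region: str) -> list[dict]:
    # stand-in for a pushed-down query against the commerce platform
    return [{"sku": "A-100", "region": region, "open_orders": 55}]

def inventory_vs_demand(region: str) -> list[dict]:
    """Join at request time; nothing is copied, staged, or persisted along the way."""
    on_hand = {r["sku"]: r["on_hand"] for r in query_erp(region)}
    orders = {r["sku"]: r["open_orders"] for r in query_ecommerce(region)}
    return [{"sku": sku, "coverage": on_hand[sku] - orders.get(sku, 0)} for sku in on_hand]

print(inventory_vs_demand("EMEA"))
```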

Overall, similar to the application domain, there are significant opportunities to enable innovation and speed-to-value in the data and analytics domain, but a purposeful and thoughtful data strategy is the foundation for being cost-effective and creating long-term value.

 

Technologies

I’ve touched on technologies through the process of discussing optimization opportunities in both the application and data domains, but it’s important to understand the difference between technology rationalization (the tools and technologies you use to enable your IT environment) and application or data rationalization (the solutions that leverage those underlying technologies to solve business problems).

The process for technology simplification is the same as described in the other two domains, so I won’t repeat the concepts here beyond reiterating that a strong package or technology evaluation process (one that considers the relationship to existing solutions in place) and governance of new technology introductions, with explicit plans to replace or retire legacy equivalents and ensure organizational readiness to support the new technologies in production, are critical to optimizing value/cost in this dimension.

 

Infrastructure

At an overall level, unless there is a significant compliance, competitive, privacy, or legal reason to do so, I would argue that no one should be in the infrastructure business unless it IS their business.  That may be a somewhat controversial point of view, but at a time when cloud and hosting providers are both established and mature, arguing the differentiated value of providing (versus managing) these capabilities within a typical IT department is a significant leap of faith in my opinion.  Internal and external customer value and innovation are created in the capabilities delivered through applications, not the infrastructure, networking, and storage underlying those solutions.  This isn’t to say these capabilities aren’t a critical enabler; they definitely are.  Rather, the overall organizational goal in infrastructure, from my perspective, should be to ensure quality of service at the right cost (through third party providers to the maximum extent possible), and then manage and govern the reliability and performance of that set of environments, focusing on continuous improvement and enabling innovation as required by consuming solutions over time.

There are a significant number of cost elements associated with infrastructure, a lot of financial allocations involved, and establishing TCO through these indirect expenses can be highly complex in most organizations.  As a result, I’ll focus on three overall categories that I consider significant and acknowledge there is normally opportunity to optimize value/cost in this domain beyond these three alone (cloud, hosted solutions, and licensing).  This is partially why working with a defined set of providers and managing and governing the process can be a way to focus on quality of service and desired service levels within established cost parameters versus taking on the challenge of operationalizing a substantial set of these capabilities internally.

Certainly, a level of core network and cyber security infrastructure is necessary and critical to an organization under any circumstances, something I will touch on in a future article on the minimum requirements to run an innovation-centric IT organization, but even in those cases, that does not imply or require that those capabilities be developed or managed internally.

 

Cloud

With the ever-expanding set of cloud-enabled capabilities, there are three critical watch items that I believe have significant impact on cost optimization over time:

  • Innovation
    • Cloud platform providers are making significant advancements in their capabilities on an annual basis, some of which can help enable innovation
    • To the extent that some of the architecture and integration principles above are leveraged, and a thoughtful, disciplined process is used to evaluate and manage the introduction of new technologies over time, organizations can benefit from leveraging cloud as a part of their infrastructure strategy

 

  • Multi-cloud Integration
    • The reality of cloud providers today is also that no one is good at everything and there is differentiated value in the various services provided by each of them (GCP, Azure, AWS)
    • The challenge is how to integrate and synthesize these differentiated capabilities in a secure way without creating significant complexity or cost in the process
    • Again, having a modular, composable architecture mindset with API- or service-based integration is critical in finding the right balance for leveraging these capabilities over time
    • Where significant complexity and cost can be created is where data egress comes into play from one cloud platform to another.  Consequently, such data movement should, in my opinion, be limited to situations where the value of moving the data (ideally without persisting it in the target platform) clearly outweighs the cost of operating in that overall environment (a rough arithmetic sketch follows this list)
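
To make the egress point a bit more concrete, here is a minimal back-of-the-envelope sketch in Python.  The per-GB rate and the monthly volume are illustrative assumptions, not actual provider pricing; the point is simply that a recurring cross-cloud copy becomes a recurring line item.

```python
# Rough, illustrative estimate of recurring cross-cloud data movement cost.
# The rate and volume below are assumptions for the sake of the example,
# not actual provider pricing.

EGRESS_RATE_PER_GB = 0.09   # assumed blended $/GB egress rate
MONTHLY_SYNC_GB = 5_000     # assumed volume copied between clouds each month

def annual_egress_cost(monthly_gb: float, rate_per_gb: float = EGRESS_RATE_PER_GB) -> float:
    """Return the yearly cost of moving monthly_gb out of a cloud every month."""
    return monthly_gb * rate_per_gb * 12

if __name__ == "__main__":
    cost = annual_egress_cost(MONTHLY_SYNC_GB)
    # At these assumptions: 5,000 GB/month x $0.09/GB x 12 = $5,400/year,
    # before any duplicate storage or operational overhead in the target platform.
    print(f"Assumed annual egress cost: ${cost:,.0f}")
```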

 

  • FinOps Discipline
    • The promise of having managed platforms that convert traditional capex to opex is certainly an attractive argument for moving away from insourced and hosted solutions to the cloud (or a managed hosting provider for that matter). The challenge is in having a disciplined process for leveraging cloud services, understanding how they are being consumed across an organization, and optimizing their use on an ongoing basis.
    • Understandably, there is not a direct incentive for platform providers to optimize this on their own, and their tools largely provide transparency into spend related to consumption of various services over time.
    • Hopefully, as these providers mature, we’ll see more of an integrated platform within and across cloud providers to help continuously optimize a footprint so that it provides reliability and scalability without promoting over-provisioning or other costs that don’t provide end-customer value in the process (a simple sketch of the kind of review I mean follows this list).
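
As a small illustration of the discipline I’m describing, the sketch below aggregates a cost-and-usage export by a “team” tag and flags the largest line items for review.  The file name, column names, and threshold are hypothetical; a real FinOps process would work against the billing export format of whichever provider(s) are in use.

```python
# Minimal sketch of a FinOps-style spend review, assuming a CSV billing export
# with hypothetical columns: team, service, monthly_cost.
import csv
from collections import defaultdict

REVIEW_THRESHOLD = 10_000  # assumed monthly spend that triggers a review conversation

def spend_by_team(path: str) -> dict[str, float]:
    """Aggregate monthly cost per team tag from a billing export."""
    totals: dict[str, float] = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            team = row.get("team") or "UNTAGGED"   # untagged spend is itself a finding
            totals[team] += float(row["monthly_cost"])
    return dict(totals)

if __name__ == "__main__":
    for team, cost in sorted(spend_by_team("billing_export.csv").items(),
                             key=lambda kv: kv[1], reverse=True):
        flag = "  <-- review" if cost >= REVIEW_THRESHOLD else ""
        print(f"{team:20s} ${cost:>12,.2f}{flag}")
```

Whether it is a script like this or a commercial FinOps tool, the design point is the same: consumption has to be visible by owner before it can be governed or optimized.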

Given the focus of this article is cost optimization and not cloud strategy, I’m not getting into cloud modernization, automation and platform services, containerization of workloads, or serverless computing, though arguably some of those also can provide opportunities to enable innovation, improve reliability, enable edge-based computing, and optimize value/cost as well.

 

Internally Managed / Hosted

Given how far we are into the age of cloud computing, I’m assuming that legacy environments have largely been moved into converged infrastructure.  In organizations where this is not the case, those environments should be evaluated, along with the potential for outsourcing their hosting and management where possible (and competitive) at a reasonable value/cost level.

One interesting observation is that organizations don’t tend to want to make significant investments in modernizing legacy environments, particularly those in financial services resting on mainframe or midrange computing solutions.  That being said, given that these are normally shared resources, as the burden of those costs shifts (where teams selectively modernize and move off those environments) and allocations of the remaining MIPS and other hosting charges are adjusted, the priority in revisiting those strategies tends to change.  Modernization should be a continuous, proactive process rather than a reactive one, because the resulting technology decisions can otherwise be suboptimal and devolve into lift-and-shift approaches versus true modernization or innovation opportunities (I’d consider this under the broader excellence topic of relentless innovation).

 

Licensing

The last infrastructure dimension that I’d call out is licensing.  While I’ve already addressed the opportunity to promote innovation and optimize expense through rationalizing applications, data solutions, or underlying technologies individually, there are three other dimensions that are worth consideration:

  • Partner Optimization
    • Between leverage of multi-year agreements on core, strategic platforms and consolidation of tools (even in a best-of-breed environment) to a smaller set of strategic, third-party providers, there are normally opportunities to reduce the number of technology partners and optimize costs in large organizations
    • The watch item would be to ensure such consolidation efforts consider volatility in the underlying technology environment (e.g., the commitment might be too long for situations where the pace of innovation is very high) while also ensuring conformance to the component and integration architecture strategies of the organization so as not to create dependencies that would make transition of those technologies more complex in the future

 

  • Governance and Utilization
    • Where licensing costs are either consumption-based or up for renewal, having established practices for revisiting the value and usage of core technologies over time can help in optimization. This can also be important in ensuring compliance with critical contract terms where appropriate (e.g., named user scenarios, concurrent versus per-seat agreements)
    • In one example a number of years ago, we decided to investigate indirect expense coming through software licenses and uncovered nearly a million dollars of software that was being renewed annually but wasn’t being utilized by anyone. The reality is that we treated these as bespoke, fixed charges and no one was looking at them at any interval.  All we needed to do in that case was pay attention and do the homework (a simple sketch of that homework follows this list).
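
The “homework” in that example amounted to a cross-check of what was licensed against what was actually being used.  The sketch below illustrates the idea; the products, costs, dates, and 90-day inactivity window are all assumptions for the sake of the example.

```python
# Illustrative cross-check of licensed software against usage records.
# Products, costs, dates, and the inactivity window are assumed for the example.
from datetime import date, timedelta

INACTIVITY_WINDOW = timedelta(days=90)  # assumed: no use in 90 days = review candidate

# Hypothetical inventory: product -> (annual_renewal_cost, last_recorded_use)
license_inventory = {
    "Legacy ETL Suite": (120_000, date(2022, 4, 1)),
    "Reporting Add-on": (45_000, date(2023, 3, 15)),
    "Core ERP Module": (600_000, date(2023, 4, 5)),
}

def unused_renewals(inventory: dict, today: date) -> list[tuple[str, int]]:
    """Return (product, annual_cost) for licenses with no recorded use inside the window."""
    cutoff = today - INACTIVITY_WINDOW
    return [(name, cost) for name, (cost, last_use) in inventory.items() if last_use < cutoff]

if __name__ == "__main__":
    flagged = unused_renewals(license_inventory, today=date(2023, 4, 9))
    for name, cost in flagged:
        print(f"Review before renewal: {name} (${cost:,}/year)")
    print(f"Potential annual savings if retired: ${sum(c for _, c in flagged):,}")
```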

 

  • Transition Planning
    • The most important of these three areas builds on the same idea of having a governance process in place.
    • With regard to transition, the idea is to establish a companion process to the software renewal cycle for critical, core technologies (i.e., those providing a critical capability or having significant associated expense). This process would involve a health check (similar to package selection, but including incumbent technologies/solutions) at a point commensurate with the window of time it would take to evaluate and replace the solution if it were no longer the best option to provide a given capability (a simple illustration of the timing follows this list).
    • Unfortunately, depending on the level of dependency that exists for third-party solutions, it is not uncommon for organizations to lack a disciplined process to review technologies in advance of their contractual renewal period and be forced to extend their licenses because of a lack of time to do anything else.
    • The result can be that organizations deploy new technologies in parallel with ones that are no longer competitive purely because they didn’t plan in advance for those transitions to occur in an organic way
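
To illustrate the timing idea, the sketch below works backwards from each renewal date by an assumed evaluation-and-transition lead time to determine when the health check needs to begin.  The contracts, dates, and lead times are hypothetical.

```python
# Simple sketch of renewal-driven transition planning: start the health check
# far enough ahead of renewal to evaluate and, if needed, replace the solution.
# Contracts, renewal dates, and lead times below are hypothetical.
from datetime import date, timedelta

contracts = [
    # (technology, renewal_date, months needed to evaluate and transition)
    ("Data integration platform", date(2024, 1, 31), 12),
    ("Service desk tooling", date(2024, 6, 30), 6),
]

def health_check_start(renewal: date, lead_months: int) -> date:
    """Approximate the date the health check must begin (30-day months assumed)."""
    return renewal - timedelta(days=30 * lead_months)

if __name__ == "__main__":
    today = date(2023, 4, 9)
    for name, renewal, lead in contracts:
        start = health_check_start(renewal, lead)
        status = "OVERDUE - renewal likely forced" if start < today else "on track"
        print(f"{name}: begin health check by {start} ({status})")
```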

Similar to the other categories, where licensing is a substantial cost component of IT expense, the general point is to be proactive and disciplined about managing and governing it.  This is a source of overhead that is easy to overlook and that can create undue burden on the overall value/cost equation.

 

Services

I’m going to write on workforce and sourcing strategy separately, so I won’t go deeply into this topic or direct labor in this article beyond a few points in each.

In optimizing cost of third-party provided services, a few dimensions come to mind:

  • Sourcing Strategy
    • Understanding and having a deliberate mapping of primary, secondary and augmentation partners (as appropriate) for key capabilities or portfolios/solutions is the starting point for optimizing value/cost
    • Where a deliberate strategy doesn’t exist, the ability to monitor, benchmark, govern, manage, and optimize will be complex and only effective on a limited basis
    • Effective sourcing, and certain approaches to how partners are engaged, can also be a key lever in enabling rapid execution of key strategies, managing migration across legacy and modernized environments, establishing new capabilities where a talent base doesn’t currently exist internally, and optimizing expense that may be either fragmented across multiple partners or enabled through contingent labor in ad-hoc ways, all of which can help optimize the value/cost ratio on an ongoing basis

 

  • Vendor Management
    • Worth noting that I’m using the word “vendor” here because the term is fairly well understood and standard when it comes to this process. In practice, I avoid the word “vendor” in favor of “partner,” as I believe the latter signals a healthy approach and mindset when it comes to working with third parties.
    • Having worked in several consulting organizations over a number of years, it was very easy to tell which clients operated in a vendor versus a partnership mindset, and the former can be a disincentive to making the most of these relationships
    • That being said, organizations should have an ongoing, formalized process for reviewing key partner relationships, performance against contractual obligations, on-time delivery commitments, quality expectations, management of change, and achievement of strategic partner objectives.
    • There should also be a process in place to solicit ongoing feedback both on how to improve effectiveness and the relationship but also to understand and leverage knowledge and insights a partner has on industry and technology trends and innovation opportunities that can further increase value/cost performance over time.

 

  • Contract Management
    • Finally, having a defined, transparent, and effective process for managing contractual commitments and the associated incentives where appropriate can also be important to optimizing overall value/cost
    • It is generally true that partners don’t deliver to standards that aren’t established and governed
    • Defining service levels, quality expectations, and fixed-price or risk-sharing models where appropriate, and then reviewing and holding both partners and the internal organization working with those partners accountable to those standards, is important in having both a disciplined operating environment and a disciplined delivery environment
    • There’s nothing wrong with assuming everyone will do their part when it comes to living into the terms of agreements, but there also isn’t harm in keeping an eye on those commitments and making sure that partner relationships are held to evolving standards that promote maturity, quality, and cost effectiveness over time

Similar to other categories, the level of investment in sourcing, whether through professional service firms or contingent labor, should drive the level of effort involved in understanding, governing, and optimizing it, but some level of process and discipline should be in place almost under any scenario.

 

Labor

The final dimension to optimizing value and cost is direct labor.  I’m guessing, in writing this, that it’s fairly obvious I put this category last, and I did so intentionally.  It is often said that “employees are the greatest source of expense” in an organization.  Interestingly enough, “people are our greatest asset” has also been said many times.

In the section on portfolio management, I mentioned the importance of having a workforce and sourcing strategy and understanding how people are aligned to demand on an ongoing basis.  That is a given and should be understood and evaluated with a critical eye towards how things flex and adjust as demand fluctuates.  It is also a given and assumed that an organization focused on excellence should be managing performance on a continuing basis (including times of favorable market conditions) so as not to create organizational bloat or ineffectiveness.  Said differently, poor performance that is unmanaged in an organization drags down average productivity, has an adverse impact on quality, and ultimately has a negative impact on value/cost because the working capacity of an organization isn’t being applied to ongoing demand and delivery needs effectively.  Where this is allowed to continue unchecked over too long a duration, the result may be an over-correction that can also have adverse impacts on performance, which is why it should be an ongoing area of focus rather than an episodic one.

Beyond performance management, I believe it’s important to think of all of the expense categories before this one as variable, which is sometimes not the case in the way they are evaluated and managed.  If non-direct labor expense is substantial, a different question to consider is the relative value of “working capacity” (i.e., “knowledge workers”) by comparison with expense consumed in other things.  Said differently, a mental model that I used with a team in the past was that “every million dollars we save in X (insert dimension or cost element here… licensing, sourcing, infrastructure, applications) is Y people we can retain to do meaningful work.”

Wrapping Up

This has been a relatively long article, but still only a high-level treatment of a number of these topics; hopefully it has been useful in calling out many of the opportunities that are available to promote excellence in operations and optimize value/cost over time.

In my experience, having been in multiple organizations that have realigned costs, it takes engaged and courageous leadership to make thoughtful changes versus expedient ones… it matters… and it’s worth the time invested to find the right balance overall.  In a perfect world, disciplined operations should be a part of the makeup of an effectively led organization on an ongoing basis, not the result of a market correction or fluctuation in demand or business priorities.

 

Excellence always matters, quality and value always matter.  The discipline it takes to create and manage that environment is worth the time it takes to do it effectively.

 

Thank you for taking the time to read the thoughts.  As with everything I write, feedback and reactions are welcome.  I hope this was worth the investment in time.

-CJG 04/09/2023

What Music Taught Me About Business

It’s been a while since I’ve had a chance to post an article, so I thought I’d take a pause on two of them that are in process and write about the relationship I’ve seen between music and work, because it’s been the source of reflection at different points in time over the years.

To set the stage, I’ve been playing the drums for over forty years, performing various styles of music, from trio jazz and big band, to blues, R&B, fusion, rock, pop music, and probably others I’m not remembering at the moment.  At my busiest time, I played forty jobs with eight groups over the course of a year separate from my “day job”, which was a lot to handle, but very fun at the same time.  Eventually, when there wasn’t time to “play out”, I started a YouTube channel where I continue to record and share music to the extent that I can.  The point being that music has been a lifelong passion, and I’ve learned things over time that have parallels to what I’ve experienced in the work environment, which I wanted to share here.

To provide some structure, I’ll tackle this in three parts:

  • Performing and high-performance teams
  • Being present in the moment
  • Tenacity, commitment, and role of mentoring

Performing and High-Performance Teams

I’ll start with a simple point: the quality of the music you hear in a live performance is not solely about the competence of the individual musicians, it’s how they play as a group.

Having performed with many musicians over the years, one of the amazing feelings that occurs is when you are in the middle of a song, everyone is dialed in, the energy level is palpable, and it feels like anything is possible with where the music could go.  It’s particularly noticeable in settings like small group jazz, where you can change the style of the song on the fly, with the nod of your head or a subtle rhythmic cue to the other musicians, and suddenly everyone moves in a different direction.  The same is possible in other styles of music, but sometimes with less range of options.  The energy in those moments is amazing, both for the performers and the audience, in part because everyone is creating together and the experience is very fluid and dynamic as it unfolds.

There are three things that make an experience like this possible:

  • Everyone has to be engaged in what’s going on
  • Everyone has to be listening, communicating, and collaborating in an effective manner
  • The group has to be made up of highly capable players, because the overall quality of the experience will be limited by the least effective or competent of the collaborators

It’s not difficult to see how this relates to a business setting, where high performance teams achieve greater results because of their ability to make the most of everyone’s individual contributions through effective communication and collaboration.  Where teams don’t communicate or adapt and evolve together, productivity and impact are immediately compromised.

The litmus test for the above could be asking the questions:

  • Is everyone on a team engaged?
  • Is everyone listening, contributing, and collaborating effectively (regardless of their role)?
  • Is everyone “on their game” at an individual performance level?

If any of the above isn’t a resounding “yes”, then there is probably opportunity to improve.

 

Being Present In the Moment

The second observation also relates to performing and handling mistakes.

One thing that I’ve always enjoyed about performing live is the challenge of creating an incredible experience for the audience.  That experience really comes down to the energy they feel from the performers and putting everything you have out there over the course of a show so there is nothing left to give by the time you’re done. 

What is very liberating and fun in that environment is that an audience doesn’t care what you do “as a day job” when they arrive at the venue or club where you’re performing.  As far as they are concerned, you’re “in the band” and the expectation (at some level) is that you’re going to be a professional and perform at a level that meets or exceeds their expectations.

Two things are interesting by comparison with a work environment in this regard:

  • First, it doesn’t matter as a performer what you did yesterday, last week, last month, or last year when you step on stage. The only thing the audience cares about is how you show up that night.  It’s a great mental model for the business environment, where it can be easy to become complacent, fall back on things that one did in the past and forget that we prove ourselves in the value we create each day.
  • Second, the minute you make a mistake on stage (and the more you stretch for things in the course of a performance, the more likely it will occur), you recover and you move forward. You don’t waste time looking backwards because music is experienced in the moment you create it, the moment passes, and there is a new opportunity to make something special happen.  This is something I’ve struggled with and worked on over time because, as a highly motivated person, it’s frustrating to make mistakes and that can lead to a tendency to beat yourself up over them when they happen.  Unfortunately, while there is a benefit to reflecting and learning from mistakes, the most important thing when they occur is not to let one mistake lead to another one, but rather to focus, recover, and make adjustments so the next set of things you do are solid.

On the latter point, it’s worth noting that when I record music for my channel, I try to do so using a single take every time.  I do that because it’s the most like live performance and forces a level of focus and intensity as a result.  The approach can lead to a mistake here or there in a recording, and that’s ok.  I’d rather make mistakes reaching for something difficult than do something “perfectly” that is easy by comparison.

 

Tenacity, Commitment, and the Role of Mentoring

The final portion of this article is a story in a couple stages, and it’s about dealing with adversity.

Dealing with Failure

When I arrived for my freshman year at the University of Illinois, I signed up for tryouts for both the concert and jazz bands.  In the case of concert band, I knew what to expect (for the most part), given there were prepared selections, a couple solo pieces you were asked to prepare, and a level of sightreading that you were asked to do in the audition itself.  The audition was in front of a panel of fourteen people, including all of the directors, and, while relatively uncomfortable, I nailed the parts pretty well, made the First Concert Band, and I was set.  Felt good… Check the box.

For jazz band, I didn’t know what to expect, other than going to sign up for a time slot at the Music building, showing up, and just doing whatever was needed.  Having played for at least two years in high school leading up to the audition, I thought I had things together and wasn’t really worried.  That was probably my first mistake: complacency.  What I didn’t consider was that, in any given year, there were between 24 and 32 drummers trying out for 6 to 8 spots in the four jazz bands at the time, and some positions were already assumed given guys returning from the previous year (even though they were in the pool of people “auditioning”).  Generally speaking, none of the people auditioning were bad players either, and it was much more competitive than I had realized or prepared for.  As is probably obvious, I did the audition, didn’t necessarily mess anything up, but I also didn’t crush it either.  I didn’t make it and was massively disappointed, because I didn’t want to have to give up on playing for an entire year.

This is where the first pivot happened, which is that I decided, regardless of having failed, I wasn’t going to stop playing.  I brought my drums down to school, even though I had to squeeze them into the closet of my dorm room.  I met with the director of the second jazz band and told him that, while I didn’t make it, I wanted to form a small group and have a place to practice, and asked for his help.  In response, he not only got me access and permission to use a room in the music building where I could go to practice (by myself or with others), but also gave me the contact sheet for everyone who made the four jazz bands.  I then called everyone on the list, starting with the top band, asking if anyone was interested in getting together to play.  Eventually, I was able to cobble together a group between members of the third and fourth bands, we met a number of times over the course of the year, I had a friend help shuttle me and my equipment to the music building, and I was able to keep playing despite the situation.

Separately, I also attended many of the performances of the bands over the course of the year, so I could see the level of ability of the drummers who did make the cut as well as the style and relative difficulty of the music they had to play.  In this regard, I wanted to understand what was expected so I could prepare myself effectively the following year.

The learning for me from this, many years before I experienced it in a professional setting, was that I don’t look at challenges or adversity as limiting constraints; I see them as something to be worked around and overcome. That is ultimately about tenacity and commitment.  I could have spent that year on the sidelines, but instead I found another way to get valuable experience, play with musicians who were in the bands and build some relationships, and probably (to a degree) make an impression on one of the directors that I was willing to do whatever it took to keep playing.

 

The Role of a Mentor

Having found a way to stay active, the other primary thing that was on my mind heading into the summer after my freshman year was doing everything I could to not fail again.

I sought out a teacher who was very well known, with an exceptional reputation in Chicago, having taught for probably forty years at the time.  He was teaching through a local community college, so I signed up for a “class” of private instruction and we scheduled two one-hour lessons a week.  In preparation, he told me to buy two books, both of which looked like they were printed in 1935 (ok, probably more like 1952), and I immediately thought I might have made a mistake.

Despite the painful learning of the previous year, I somehow went into my first lesson thinking, “ok, I’m going to impress him, and he’ll help me figure out what went wrong last fall and fix it.”  That wasn’t how things went.  Rather than have me play anything on the drum set, he had me read from one of the books, on a practice pad, in a way that felt like I was back in third grade doing basic percussion stuff I hadn’t really thought about in a long time.  That was my first lesson: it took him less than 5 minutes to establish where I was at and rip me down to my foundation. There was no “impressing” him, there was work to do and seemingly quite a lot of it.  He then took notes in each of the books and gave me assignments to work on, the last of which was to apply patterns from the second book to the drum set, which was essentially a third activity in itself.  That lesson was on Monday.  I had until Thursday.  I left thinking “there is literally no way I’m going to be ready.”

And so, it began.  I set up my drums and a practice pad in my parents’ garage, and set out practicing two hours a day, every day.  I wanted to show him I could do it.  I got to the next lesson and nailed every one of the exercises perfectly.  He nodded his approval, marked up both books again… and I left thinking “there is literally no way I’m going to be able to do that again.”  The next lesson was Monday.

I did the same thing, practiced two hours a day, nailed it all, he gave me more, and the cycle repeated.  By the end of the summer, we had completed both books, a set of work on brush techniques, Latin music styles, and some other things he had in his bag of tricks.  I never missed a single lesson.  I never missed a day of practice the entire summer.

Overall, once I got past those first couple lessons, two things happened:

  • I didn’t want to disappoint him
  • I didn’t want to blow the streak I had going. I wanted to finish with a perfect record

Returning in the fall, I was completely in control of what I was doing and I made the fourth jazz band.  He told me I was one of the best students he ever had, which was very humbling given what an exceptional teacher he was and how many students he had taught over so many decades.

In retrospect, part of what really drove me was the level of respect I had for him as a teacher and mentor.  He was very direct and not always gentle in his choice of words, but his goal was discipline and excellence, and it was clear that he was only invested in making me as good as I could be if I put in the work.

The parallels to the work environment are pretty obvious here as well, which is the value of hard work itself and having a good mentor to guide you along the way.  A great coach knows how to help you address your gaps in the interest of being the best you can be, but you also have to be open and receptive to that teaching and that’s not always easy when all of us want to believe we’re fundamentally doing a “good job”.  Sometimes our greatest challenges are basic blocking and tackling issues that he made evident to me within five minutes of our first lesson.

 

Wrapping Up

I’ve said for many years that I wish I could think of things “at work” the way that I do when I perform.  In both cases, I strive for excellence, but in the case of music, I think I’ve historically been more accepting of the reality that mistakes are part of learning and getting better, probably because I don’t believe as much is “at stake” when I play versus when I work.

Hopefully some of the ideas have been worth sharing.  Thanks for taking the time to read them.  Feedback and reactions are welcome as always.

-CJG 09/29/2022

Defining a “Good Job”

In line with establishing the right environment to achieve Excellence by Design, I thought it would be worthwhile to explore the various dimensions that define a great workplace. 

In my experience, these conversations can tend to be skewed in one or two directions, but rarely seem holistic in terms of thinking through the various aspects of the employee experience.  Maintaining a healthy workplace and driving retention is ultimately about striking the right balance for an individual on terms ultimately defined by them.

I’ll cover the concept at an overall level, then address how I think about each of the dimensions in our current post-covid and employee-driven environment.

 

The Seven Dimensions

At a broad-level, the attributes that I believe define an employee’s experience are:

  • What you do
  • Who you work for
  • Who you work with
  • Where you work
  • What you earn
  • Culture
  • Work/life balance

In terms of maintaining a productive workplace, I believe that a motivated, engaged employee will ultimately want the majority of the above dimensions to be in line with their expectations.

As a litmus test, take a look at each of the above attributes and ask whether that aspect of your current job is where it should be (in a “yes”/”no”/”sort of” context).  If “sort of” was an answer, I’d argue that should be counted as a “no”, because you’re presumably not excited, or you would have said “yes” in the first place.  If three or more of your answers are “no”, you probably aren’t satisfied with your job and would consider the prospect of a change if one arose.

While it can be the case that a single attribute (e.g., being significantly undercompensated, having serious issues with your immediate manager) can lead to dissatisfaction and (ultimately) attrition, my belief is that each of us tend to consider most of the above dimensions when we evaluate the conditions of our employment or other opportunities when they arise.

From the perspective of the employer, the key is to think through how the above dimensions are being addressed to create balance and a positive environment for the employees at an individual level.  Anecdotally, that balance translates into how someone might describe their job to a third-party, such as “I work a lot of long hours… BUT… I’m paid very well for what I do”.  In this example, while work/life balance may be difficult, compensation is indexed in a way that it makes up for the difference and puts things into balance.  Similarly, someone could say “I don’t make a lot of money… BUT… I love the people I work with and what I get to do each day.”

The key question from a leadership standpoint is whether we only consider one or two dimensions in “attracting and retaining the best talent” or if instead we are thoughtful and deliberate about considering the other mechanisms that drive true engagement.  What we do with intention turns into meaningful action… and what we leave to chance puts employees in a potentially unhealthy situation that exposes companies to unnecessary risk of attrition (not to mention a poor reputation in the marketplace as a prospective employer).

Having laid that overall foundation, I’ll provide some additional thoughts on the things that I believe matter in each dimension.

 

What you do

Fundamental to the employee experience is the role you play, the title you hold, how well it aligns to your aspirations, and whether you derive your desired level of satisfaction from it, even if that manifestation is as simple as a paycheck.

Not everyone wants to solve world hunger and that’s ok.  Aligning individual needs and capabilities to what people do every day creates the conditions for success and job satisfaction.

One simple thing that can be done from an employer’s standpoint, beyond looking for the above alignment, is to recognize and thank people for the work they do on an ongoing basis.  It amazes me how the easiest thing to do is say “thank you” when people do a good job, and yet how often that isn’t acknowledged.  Recognition can mean so much to the individual, to know their work is appreciated and valued, yet it is something I’ve seen lacking in nearly every organization I’ve worked in over the last thirty years.  Often the reasoning given is that leaders are “too busy”, which is unfortunate, because no one should ever be so busy that a “thank you” isn’t worth the time it takes to send it.

 

Who you work for

There is an undeniable criticality to the relationship between an employee and their immediate manager, but I believe the perception of the broader leadership in the organization matters as well.

Starting with the manager, the litmus test for a healthy situation could be some of the following questions:

  • Is there trust between the individual and their manager?
  • Does the employee believe their manager has their best interest at heart and is invested in them, personally and professionally?
  • Does the employee believe their manager will be an effective advocate for them in terms of compensation, advancement, exposure to other opportunities, etc.?
  • Does the employee see their manager as an enabler or as an obstacle when it comes to decision making?
  • Does the employee derive meaningful feedback and coaching that helps them learn and develop their capabilities over time?
  • Does the employee feel comfortable, supported, and recognized in their day-to-day work, especially when they take risks in the interest of pursuing innovation and stretch goals?

At an organizational level, the questions are slightly different, but influence the situation as well:

  • Does the organization recognize, appreciate, and promote individual contributions and accomplishments?
  • Does the organization promote and demonstrate a healthy and collaborative climate amidst and across its leadership?
  • Do the actions of leaders follow their words? Is there integrity and transparency overall?

Again, while the tendency is to think about the employee experience in terms of their immediate manager, how they perceive the organizational leadership as a whole matters, because it can contribute to their willingness to stay and possibly become part of that leadership team down the road.  Is that environment a desirable place for an employee to be?  If not, why would they contribute at a level that could lead them there?

 

Who you work with

The people you work with in the context of your job can take on multiple dimensions, especially when you are in a services business (like consulting), where your environment is a combination of people from your organization and the clients with whom you work on an ongoing basis.  Having worked with some highly collaborative and also some very aggressive clients over the years, those interactions can definitely have an impact on your satisfaction with what you do, particularly if those engagements are longer-term assignments.

From an “internal” standpoint, your team (for those leading others), your peers, your internal customers, and so on tend to define your daily experience.  While I consider culture separate from the immediate team, there is obviously a relationship between the two.

Regardless of the overall culture of the organization, as I wrote about in my Engaged Leadership and Setting the Tone article, our day-to-day interactions with those directly collaborating with us can be very different.

Some questions to consider in this regard:

  • Do individuals contribute in a healthy way, collaborate and partner effectively, and maintain a generally positive work environment?
  • Do people listen and are they accepting of alternate points of view?
  • Does the environment support both diversity and inclusion?
  • Is there a “we” versus a “me” mentality in place?
  • Do you trust the people with whom you’re working on an ongoing basis?
  • Can you count on the people with whom you work to deliver on their commitments, take accountability, communicate with you effectively, and help you out when you need it?

Again, there are many dimensions that come into the daily experience of an employee, and it depends on the circumstances and role in terms of what to consider in evaluating the situation.

 

Where you work

In the post-covid era, I think of location in terms of three dimensions: the physical location of where you work, whether you can work remotely, and the level of travel that is required as part of your job.

For base location, there can be various considerations that weigh on the employee experience, assuming they physically need to go to the workplace.  These include ease of access (e.g., if it’s in a congested metropolitan area), nearby access to other points of interest (e.g., something major cities offer, but smaller, rural locations generally don’t), the level and nature of commuting involved (and whether that is manageable), cost of living considerations, the safety of the area surrounding the workplace itself, etc.

Where remote work is an option, I’m strongly biased towards leaning in the direction of employee preference.  If an individual wants to be in the office, then there should be reasonable accommodation for it, but conversely, if they prefer a fully remote environment, then that should be supported as well.  In the world of technology, given that distributed teams and offshoring have been in place for decades, it’s difficult to argue that it’s impossible to be effective in an environment where people aren’t physically co-located.  Where collaboration is beneficial, certainly it is possible to bring people together in a workshop-type setting and hammer out specific things.  My belief is, however, that it’s possible to work in a largely remote setting and maintain healthy relationships so long as people are more deliberate (e.g., scheduling individual meetings to connect) than when they are physically co-located.

Finally, when it comes to travel, this is again measured against the preferences of an individual.  I’ve gone from jobs where there was little to no travel involved to one where I did the “road warrior” life and traveled thirty-three weeks in one year… and it was grueling.  That being said, I have friends who have lived on the road for many years (largely in consulting) and loved it, so empirically the impact of travel on job satisfaction depends a lot on the person and whether they enjoy it.

 

What you earn

Compensation is actually one of the easier dimensions to cover, because it’s tangible and measurable.  As an employer, you either compensate people in relation to the market value of the work they are performing, or you don’t, but the data is available and employees can do their own due diligence to ascertain whether your compensation philosophy is to be competitive or not.  With market conditions being what they are, it seems self-defeating to not be competitive in this regard, because there are always abundant opportunities out there for capable people, and not paying someone fairly seems like a very avoidable reason to lose talent.

Where I have apprehension in the discussion, both as an employee and a person who has communicated it to individuals, is when an organization approaches the conversation as “let’s educate you on how to think about total compensation”… and then presents a discussion on everything other than base pay.  Is there a person who doesn’t consider their paycheck as their effective compensation on an ongoing basis?  Conversely, is there anyone who has left a job because the primary reason was they didn’t like the choice in healthcare provider in the benefit plan or the level of a 401(k) matching contribution? The latter scenarios are certainly possible, though I doubt they represent the majority of compensation-related attrition situations.

Of course, variable compensation can and does matter from an overall perspective, as do other forms of incentives such as options, equity, and so forth.  I’ve worked in organizations and seen models that involve pretty much every permutation, including where variable compensation is formula-based (with or without performance inputs), fixed at certain thresholds, or determined on a largely subjective basis.  That being said, in a tough economy with the cost of just about everything on the rise, most people aren’t going to look towards a non-guaranteed variable income component (discretionary or otherwise) to help them cover their ongoing living expenses.  Nice to have?  Absolutely.  The foundation for a sense of employee security in an organization?  Definitely not.

 

Culture

Backing up to the experience of the workplace as a whole, I separate culture from the people with whom an employee works for a reason.  In most organizations, culture is manifest in two respects: what a company suggests it is and what it actually is.

Across the seven organizations where I’ve been fortunate to work over the last thirty years, only two of them actually seemed to live into the principles or values that they expressed as part of their culture.  The implication for the other five organizations was that the actual culture was something different and, to the extent that reality was not always healthy, it had a negative impact on the desirability of the workplace overall.

Culture can be both energizing and engaging to the degree it is a positive experience.  If it is the opposite, then the challenge becomes what was referenced in the team section, which is your ability to establish a “culture within the culture” that is healthier for the individual employee.  This is somewhat of a necessary evil from my perspective, because changing an overall culture within an organization is extremely challenging (if not impossible) and takes a really long time, even with the best of intentions.  In practice, however, having a sub-culture associated with a team is, at best, a short-term fix, because ultimately most teams need to partner and collaborate with others outside their individual unit, and unhealthy behaviors and communication in the culture at large will eventually erode the working dynamics within that high-performance team.

 

Work/life balance

The final dimension is somewhat obvious and, again, very subjective, which is the level of work/life balance an individual is able to maintain and how well that aligns to their goals and needs.  In some cases, it can be that someone works more than is “required” because they enjoy what they are doing, are highly motivated, or seeking to expand their knowledge or capability in some way.  The converse, however, can also be true where an individual works in an unsustainable way, their personal needs suffer, and they end up eventually becoming less productive at work.

From the perspective of the employer, at a minimum it is a good idea to have managers check in with their team members to understand where they are in terms of having the right balance and do what they can to help enable employees to be in a place that works for them.  To the extent these discussions don’t happen, then some of the aspects of the relationship between an employee and their immediate manager may suffer and the impact from this dimension could be felt in other areas as well.

 

Wrapping up

So, bringing things together, the goal was to introduce the various dimensions of what makes a work environment engaging and positive for an employee, along with some thoughts on how I think of each of them.

If I were to attach a concept/word to each to define what good looks like, I would suggest:

  • What you do – REWARDING
  • Who you work for – TRUSTED
  • Who you work with – ENERGIZING
  • Where you work – CONVENIENT
  • What you earn – REASONABLE
  • Culture – EMPOWERING
  • Work/life balance – ALIGNED

To the degree that leaders pay attention to how they are addressing each of these seven areas, individually and collectively, I believe it will have a positive impact on the average employee experience, productivity and engagement, and the performance of the organization overall.

I hope the thoughts were worth the time spent reading them.  Feedback, as always, is welcome.

-CJG 06/16/2022