Perspective on Impact-Driven Analytics

Overview

I’ve spent a reasonable amount of time in recent years considering data strategy and how to architect an enterprise environment that is responsive and resilient to change.  What has complicated matters is the many dimensions to establishing a comprehensive data strategy and the pace at which technologies and solutions have been, and continue to be, introduced, which shows no sign of slowing down… quite the opposite.  At the same time, the focus on “data centricity” and organizations’ desire to make the most of the insights embedded within and across their enterprise systems have created a substantial pull to drive experimentation and create new solutions aimed at monetizing those insights for competitive advantage.  With the recent advent of Generative AI and large language models, the fervor surrounding analytics has only intensified around the potential it may create, not all of it to a favorable end.

The problem is that, not unlike many other technology “gold rush” situations that have occurred over the thirty-one years I’ve been working in the industry, the lack of structure and discipline (or an overall framework) to guide execution can lead to a different form of technical debt, suboptimized outcomes, and complexity that ultimately doesn’t scale to the enterprise.  Hopefully this article will unpack the analytics environment and provide a way to think about the various capabilities that can be brought to bear in a more structured approach, along with the value in doing so.

Ultimately, analytics value is created through insight-driven, orchestrated actions, not through the presentment or publication of data itself.

 

Drawing a “Real World” Comparison

The anecdotal hallmark of “traditional business intelligence” is the dashboard, which in many cases reflects a visual representation of data contained in one or more underlying systems, meant to increase end user awareness of the state of affairs, whatever the particular business need may be (a topic I’ve peripherally addressed in my On Project Health and Transparency article).

Having leased a new car last summer, I was both impressed and overwhelmed by the level of sophistication available to me through the various displays in the vehicle.  The capabilities have come a long way from a bunch of dials on the dashboard with a couple of lights to indicate warnings.  That being said, there was a simplicity and accessibility to that older design.  You knew the operating condition of the vehicle (speed, fuel, engine temp, etc.), were warned about conditions you could address (add oil, washer fluid), and were alerted to situations where expert assistance might be needed (the proverbial “check engine” light).

What impressed me about the current experience design was the level of configurability involved: what I want to see on each of the displays, from advanced operating information, to warnings (exceeding the speed limit, not that this ever happens…), to suggestions for optimizing engine performance and fuel efficiency based on analytics run over the course of a road trip.

This isn’t very different from the analytics environment available to the average enterprise: the choices are seemingly endless, and they can be quite overwhelming if not managed in some way.  Modeling the right experience comes down to starting with the questions and desired outcomes, then working backwards to the capabilities and data that need to be brought to bear to address those needs.  Analytics has historically tended towards a “data- or source-forward” mental model, when the ideal environment should be defined “outcome-backwards”, with the ultimate solution rooted in a problem (or use case) meant to be solved.

 

Where Things Break Down

As I stated in the opening, the analytics landscape has gotten extremely complex in recent years, and seemingly at an increasing pace.  What this can do, as is somewhat the case with large language models and Generative AI right now, is create a lot of excitement over the latest technology or solution without a sense of how it can be used or scaled within and across an enterprise.  I liken this to a rush to the “cool” versus the “useful”, and it becomes a problem the minute it distracts from the underlying realities of the analytics environment.

Those realities are:

  • Business ownership and data stewardship are critical to identifying the right opportunities and unlocking the value to be derived from analytics. Technology is normally NOT the underlying issue in having an effective data strategy, though disciplined delivery can obviously be a challenge depending on the capabilities of the organization
  • Not all data is created equal, and it’s important to be selective about what data is accessed, moved, stored, curated, and governed… because there is a business and technology cost for doing so
  • Technologies and enabling capabilities WILL change, so the way they are integrated and orchestrated is critically important to leveraging them effectively over time
  • It is easy to develop solutions that solve a specific need or use case, but hard to scale and integrate them as enterprise-level solutions. In an Intelligent Enterprise, this is where orders of magnitude in value and longer-term competitive advantage are created, across digitally connected ecosystems (including those with partners), starting with effective master data management and extending to newer capabilities that will be discussed below

At an overall level, while it’s relatively easy to create high-level conceptual diagrams or point solutions in relation to data and analytics, it takes discipline and thought to architect an environment that will produce value and agility at scale… that is part of what this article is intended to address.

 

Thoughts on “Data Centricity”

Given there is value to be unlocked through an effective data strategy, “data centricity” has become fairly common language as an anchor point in discussion.  While I feel that calling attention to opportunity areas can be healthy and productive, there is also a risk that concepts without substance (the antithesis of what I refer to as “actionable strategies”) can become more of a distraction than a facilitator of progress and evolution.  A similar situation arguably exists with “zero trust” right now, but that’s a topic worthy of its own article at a future date.

In the case of being “data centric”, the number of ways the language can be translated has seemed problematic to me, largely because I fundamentally believe data is only valuable to the extent it drives a meaningful action or business outcome. To that end, I would much rather be “insight-centric”, “value-focused”, “action-oriented”, or some other phrase that leans towards what we are doing with the data we acquire and analyze, not the fact that we have it, or can access, store, or display it.  Those things may be part of the underlying means to an end, but they aren’t the goal in themselves, and they place emphasis on the road rather than the destination of the journey.

To the extent that “data centricity” drives a conversation on what data a business has that may, if accessed and understood, create value, fuel innovation, and provide competitive advantage, I believe there is value in pursuing it, but a robust and thoughtful data strategy requires end-to-end thinking at a deeper level than a catch phrase or tagline on its own.

 

What “Good” Looks Like

I would submit that there are two fundamental aspects of having a robust data strategy once you address business ownership and stewardship as a foundational requirement: asking the right questions, and architecting a resilient environment.

 

Asking the Right Questions

Arguably the heading is a bit misleading here, because inference-based models can suggest improvements to move from an existing to a desired state, but the point is to begin with the problem statement, opportunity, or desired outcome, and work back to the data, insights, and actions required to achieve that result.  This is a business-focused activity and is, therefore, why establishing ownership and stewardship is so critical.

“We can accomplish X if we optimize inventory across Y locations while maintaining a fulfillment window of Z”

The statement above is different from something more “traditional”, such as producing a dashboard that shows “inventory levels across locations”, “fulfillment times by location”, etc., which is then intended to inform someone who may ultimately make a decision independent of secondary impacts.  The outcome-focused framing, by contrast, recommends or enables actions that keep the inventory ecosystem calibrated in a dynamic way, continuously recalibrating to changing conditions within defined business constraints.
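
To make the contrast concrete, below is a minimal, hypothetical sketch of what an outcome-focused formulation might look like in code.  The locations, stock levels, demand figures, and the naive transfer logic are all invented for illustration; a real solution would draw on governed enterprise data and a proper optimization model.

```python
# Hypothetical and illustrative only: recommend stock transfers so every
# location can cover its expected demand (a stand-in for "maintaining a
# fulfillment window of Z").  All names and numbers are invented.

def recommend_transfers(stock, demand):
    """Suggest (from, to, units) moves from surplus to deficit locations."""
    surplus = {loc: stock[loc] - demand[loc]
               for loc in stock if stock[loc] > demand[loc]}
    deficit = {loc: demand[loc] - stock[loc]
               for loc in stock if stock[loc] < demand[loc]}
    moves = []
    # Serve the largest shortfalls first with whatever surplus exists.
    for need_loc, need in sorted(deficit.items(), key=lambda kv: -kv[1]):
        for have_loc in list(surplus):
            if need == 0:
                break
            units = min(need, surplus[have_loc])
            moves.append((have_loc, need_loc, units))
            surplus[have_loc] -= units
            need -= units
            if surplus[have_loc] == 0:
                del surplus[have_loc]
    return moves

stock = {"Chicago": 120, "Dallas": 40, "Atlanta": 90}   # on-hand units
demand = {"Chicago": 70, "Dallas": 85, "Atlanta": 60}   # forecast demand

for src, dst, units in recommend_transfers(stock, demand):
    print(f"Move {units} units from {src} to {dst}")
```

The point isn’t the algorithm, which is deliberately naive, but that the code starts from the desired outcome (every location able to meet demand) and emits recommended actions rather than a display of raw inventory levels.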

While the example itself may not be perfect, the point is whether we think about analytics as presentment-focused or outcome-focused.  To the degree we focus on enabling outcomes, the requirements of the environment we establish will likely be different, more dynamic, and more biased towards execution.

 

Architecting a Resilient Environment

With the goals identified, the technology challenge becomes not just enabling those outcomes, but architecting an environment that can and will evolve as those needs change and as the underlying capabilities of analytics as a whole continue to advance.

What that means, as the next section will explore, is having a structured and layered approach so that capabilities can be applied, removed, and evolved with minimal disruption to other aspects of the overall environment.  This is, at its essence, a modular and composable architecture that enables interoperability through standards-based interaction across the layers of the solution, in a way that will accelerate delivery and innovation over time.

The benefit of designing an interoperable environment is simple: speed, cost, and value.  As I mentioned in “Where Things Break Down”, in technology there should always be a bias towards rapid delivery.  That being said, focusing solely on speed tends to create a substantial amount of technical debt and monolithic solutions that don’t create cumulative or enterprise-level value.  Short-term, they may produce impact, but medium- to longer-term, they make things considerably worse once they have to be maintained and supported and the costs of doing so escalate.  Where a well-designed environment helps is in creating a flywheel effect over time, accelerating delivery through common infrastructure, integration standards, and frameworks so that the distance between idea and implementation is significantly reduced.

 

Breaking Down the Environment

The following diagram represents the logical layers of an analytics environment and some of the solutions or capabilities that can exist at each tier.  While the diagram could arguably be drawn in various ways, the reason I’ve drawn it like this is to show the separation of concerns between where content and data originates and ultimately where it’s consumed, along with the layers of processing that can occur in between.

With the separation of concerns defined and standards (and reference architecture) established, the ability to scale, to integrate new solutions and capabilities over time, and to retire or modernize those that don’t create the right level of value becomes considerably easier than when analytics solutions are purpose-built in an end-to-end manner.

The next section will elaborate on each of the layers to provide more insight on why they are organized in this manner.

 

Consume and Engage

The “outermost” tier of the environment is the consumption layer, where all of the underlying analytics capabilities of an organization should be brought to bear.

In the interest of transforming analytics, as was previously mentioned in the context of “data centricity”, the dialogue needs to move from “What do you want to see?” in business terms to “What do you want to accomplish, and how do you want that to work from an end user standpoint?”, then employ capabilities at the lower tiers to enable both that outcome and that experience.

The latter dimension is important, because it is possible to deliver both data and insights and not enable effective action, and the goal of a modern analytics environment is to enable outcomes, not a better presentment of a traditional dashboard.  This is why I’ve explicitly called out the role of a Digital Experience Platform (DXP) or minimally an awareness of how end users are meant to consume, engage, and interact with the outcome of analytics, ideally as part of an integrated experience that enables or automates action based on the underlying goals.

As analytics continue to move from passive and static to more dynamic and near real-time solutions, the role of data apps as an integrated part of applications or a digital experience for end users (internal or external) will become critical to delivering on the value of analytics investments.

Again, the requirements at this level are defined by the business goals or outcomes to be accomplished, questions to be answered, user workflows to be enabled, etc. and NOT the technologies to be leveraged in doing so.  Leading with technologies is almost certainly a way to head down a path that will fail over time and create technical debt in the process.

At an overall level, the reason for separating consumption and thinking of it independent of anything that “feeds” it is that, regardless of how good the data or insights produced in the analytics environment are, if the end user can’t take effective action on what’s delivered, there will be little value created by the solution.

 

Understand and Analyze

Once the goal is established, the capabilities to be brought to bear become the next level of inquiry:

  • If there is a set of activities associated with this outcome that requires workflow, rules, and process automation, orchestration should be integrated into the solution
  • If defined inputs are meant to be processed against the underlying data and a result dynamically produced, this may be a case where a Generative AI engine could be leveraged
  • If natural language input is desired, a natural language processing engine should be integrated
  • If the goal is to analyze the desired state or outcome against the current environment or operating conditions and infer the appropriate actions to be taken, causal models and inference-based analytics could be integrated. This is where causal models take a step past Generative AI in their potential to create value at an enterprise level, though the “describe-ability” of the underlying operating environment would likely play a key role in the efficacy of these technologies over time
  • Finally, if the goal is simply to run data sets through “traditional” statistical models for predictive analytics purposes (as an example), AI/ML models may be leveraged in the eventual solution

Having referenced the various capabilities above, there are three important points to understand about why this layer is critical and separated from the rest:

  • Any or all of these capabilities may be brought to bear, regardless of how they are consumed by an end user, and regardless of how the underlying data is sourced, managed, and exposed.
  • Integrating them in ways that are standards-based will allow them to be applied as and when needed in various solutions, creating considerable cumulative analytical capability at an enterprise level
  • These capabilities definitely WILL continue to evolve and advance rapidly, so thinking about them in a plug-and-play approach will create considerable organizational agility to respond to and integrate innovations as and when they emerge, which translates into long-term value and competitive advantage (a minimal sketch of this idea follows below).
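
To illustrate the plug-and-play idea referenced above, here is a minimal sketch, assuming a simple in-process registry and an invented AnalysisRequest shape; none of the names reflect a specific product or framework.

```python
# A minimal sketch of a "plug-and-play" analytics capability layer.
# The capability names and AnalysisRequest fields are hypothetical;
# the point is the standards-based seam, not any specific engine.

from dataclasses import dataclass, field
from typing import Protocol

@dataclass
class AnalysisRequest:
    goal: str                       # business outcome being pursued
    payload: dict = field(default_factory=dict)

class AnalyticsCapability(Protocol):
    def analyze(self, request: AnalysisRequest) -> dict: ...

class ForecastModel:
    """Stand-in for a 'traditional' AI/ML predictive model."""
    def analyze(self, request: AnalysisRequest) -> dict:
        return {"forecast": "stubbed prediction for " + request.goal}

class CausalEngine:
    """Stand-in for an inference-based / causal capability."""
    def analyze(self, request: AnalysisRequest) -> dict:
        return {"recommended_actions": ["stubbed action"]}

# Capabilities register against one contract, so any of them can be
# swapped or retired without disturbing the consumption or data layers.
REGISTRY: dict[str, AnalyticsCapability] = {
    "forecast": ForecastModel(),
    "causal": CausalEngine(),
}

def run(capability: str, request: AnalysisRequest) -> dict:
    return REGISTRY[capability].analyze(request)

print(run("forecast", AnalysisRequest(goal="optimize inventory")))
```

The seam matters more than the stubs: a new engine can be registered, or an obsolete one retired, without touching the solutions that consume this layer.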

 

Organize and Expose

There are three main concepts I outlined in this tier of the environment:

  • Virtualization – how data is exposed and accessed from underlying internal and external solutions
  • Semantic Layer – how data is modeled for the purpose of allowing capabilities at higher tiers to analyze, process, and present information at lower levels of the model
  • Data Products – how data is packaged for the purposes of analysis and consumption

These three concepts can be implemented with one or more technologies, but the important distinction is that they offer a representation of underlying data in a logical format that enables analysis and consumption, not necessarily a direct representation of the source data or content itself.
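
As a hedged illustration of that distinction, the toy “semantic layer” below exposes one logical entity over two invented physical sources without copying the underlying data; the systems, field names, and join are entirely hypothetical.

```python
# Illustrative only: a toy "semantic layer" exposing one logical entity
# (Order) over two hypothetical physical sources, without persisting a
# copy of the underlying data.  All names are invented.

ERP_ORDERS = [  # stand-in for an ERP table with system-specific names
    {"ord_id": 1, "cust_no": "C-9", "amt_cents": 12500},
]
CRM_CUSTOMERS = {  # stand-in for a CRM system keyed by customer number
    "C-9": {"display_name": "Acme Corp"},
}

def orders():
    """Logical 'Order' entity: business-friendly names, joined at query time."""
    for row in ERP_ORDERS:
        customer = CRM_CUSTOMERS.get(row["cust_no"], {})
        yield {
            "order_id": row["ord_id"],
            "customer_name": customer.get("display_name", "unknown"),
            "amount": row["amt_cents"] / 100,   # semantic unit: dollars
        }

for order in orders():
    print(order)
```

Consumers see a stable, business-oriented shape; the physical sources behind it can be virtualized, relocated, or replaced without breaking them.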

With regard to data products in particular, while there is a significant amount of attention paid to their identification and development, they represent marginal value in an overall data strategy, especially when analytical capabilities and consumption models have evolved to such a great degree.  Where data products should be a focus (as a foundational step) is where the underlying organization and management of data is in such disarray that an examination of how to restructure and clean up the environment is important to reducing the chaos of the current state.  What that yields, however, is fewer distractions and less potential technical debt by extension, but not the kind of competitive advantage that comes from advanced capabilities and enabled consumption.  The other scenario where data products create value in themselves is when they are packaged and marketed for external consumption (e.g., credit scores, financial market data).  It’s worth noting in this case, however, that the end customer assumes the responsibility of analyzing, integrating, and consuming those products, as they are not an “end” in themselves in an overall analytics value chain.

 

Manage, Structure, and Enrich

While I listed a number of different types of solutions that can comprise a “storage” layer in the analytics environment, the best-case scenario would be that it doesn’t exist at all.  Where the storage layer creates value in analytics is in providing a means to map, associate, enrich, and transform data in ways that would be too time-consuming or expensive to do “on the fly” for the purposes of feeding the analytics and consumption tiers of the model.  There is certainly value, for instance, in graph databases for modeling complex many-to-many relationships across data sets, in marts and warehouses for dealing with structured data, and in data lakes for archival, managing unstructured data, and training analytical models.  But where source data can be exposed and streamed directly to downstream models and solutions, there will be lower complexity, cost, and latency in the overall solution.

 

Acquire and Transmit

As capabilities continue to advance and consumption models mature, the desire for near real-time analytics will almost certainly dominate the analytics environment.  To that end, leveraging event-based processing, whether through an enterprise service or event bus, will be critical.  To the degree that enterprise integration standards can be leveraged (and canonical objects, where defined), further simplification and acceleration of analytics efforts will be possible.
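
A minimal, in-process sketch of the event-based pattern follows; in a real environment this seam would be a managed broker or enterprise event bus, and the topic name and event shape here are invented for illustration.

```python
# Toy in-process event bus illustrating event-based acquisition.
# Topic names and event fields are hypothetical; in practice this seam
# would be a managed broker with canonical event schemas.

from collections import defaultdict

_subscribers = defaultdict(list)

def subscribe(topic, handler):
    _subscribers[topic].append(handler)

def publish(topic, event):
    for handler in _subscribers[topic]:
        handler(event)

# A downstream analytics consumer reacts as data is produced,
# rather than waiting on a periodic batch extract.
subscribe("inventory.updated",
          lambda e: print(f"recalibrating: {e['location']} -> {e['on_hand']}"))

publish("inventory.updated", {"location": "Dallas", "on_hand": 38})
```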

Given the varied capabilities across cloud platforms (AWS, Azure, and GCP), not to mention the probability that data will be distributed across enterprise systems hosted on a different cloud platform than an organization’s documents (such as those in Office 365), the ability to think critically about how to integrate and synthesize across platforms is also important.  Without a defined strategy for managing multi-cloud in this domain in particular, costs for egress/ingress of data could be substantial depending on the scale of the analytics environment itself, not to mention the additional complexities introduced into governance and compliance efforts by duplicated content across cloud providers.
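
To make the egress concern tangible, here is a back-of-the-envelope calculation; the volume and rate are purely illustrative, since actual pricing varies by provider, region, and tier.

```python
# Back-of-the-envelope egress math; both numbers are illustrative only.
tb_moved_per_month = 50
rate_per_gb = 0.09      # hypothetical $/GB inter-cloud egress rate

monthly_cost = tb_moved_per_month * 1024 * rate_per_gb
print(f"~${monthly_cost:,.0f}/month")   # ~$4,608/month, before storage/compute
```

Even at modest volumes, routinely shuttling data between platforms becomes a recurring cost line that a defined multi-cloud strategy should consciously manage.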

 

Generate and Provide

The lowest tier of the model is the simplest to describe, given it’s where data and content originate, which can be a combination of applications, databases, digital devices and equipment, and so forth, internal and external to an organization.  Back to the original point on business ownership and stewardship of data: if the quality of data emanating from these sources isn’t managed and governed, everything downstream will bear the fruit of the poisoned tree, to a degree commensurate with the issues involved.

Given the amount of attention paid to large language models and GenAI right now, I thought it worth noting that I consider these another form of content generation, more logically associated with the other types of solutions at this tier of the analytics model.  It could be the case that generated content makes its way through all the layers as a “data set” delivered directly to a consumer, but by orienting and associating it with the rest of the sources of data, we create the potential to apply other capabilities at the next tiers of processing to that generated content, and thereby enrich, analyze, and do more interesting things with it over time.

 

Wrapping Up

As I indicated at the opening, the modern analytics environment is complex and highly adaptive, which presents a significant challenge to capturing the value and competitive advantage that is believed to be resident in an organization’s data.

That being said, through establishing the right level of business ownership, understanding the desired outcomes, and applying disciplined thinking in how an enterprise environment is designed and constructed, there can be significant and sustainable value created for an enterprise.

I hope the ideas were thought-provoking.  I appreciate you taking the time to read them.

 

-CJG 07/27/2023

Optimizing the Value of IT

Overview

Given the challenging economic environment, I thought it would be a good time to revisit something that was an active part of my work for several years, namely IT cost optimization.

In the spirit of Excellence by Design, I don’t consider cost optimization to be a moment-in-time activity that becomes a priority on a periodic (“once every X years”) or reactive basis.  Optimizing the value/cost ratio is something that should always be a priority in the interest of having disciplined operations, maintaining organizational agility, technical relevance, and competitive advantage.

In the consulting business, this is somewhat of a given, as most clients want more value for the money they spend on an annualized basis, especially if the service is something provided over a period of time.  Complacency is the fastest path to losing a client and, consequently, there is a direct incentive to look for ways to get better at what you do, or to provide equivalent service at a lower cost to the degree the capability itself is already relatively optimized.

On the corporate side, however, where the longer-term ramifications of technology decisions bear out in accumulated technical debt and complexity, the choices become more complex as they are less about a project, program, or portfolio and become more focused on the technology footprint, operating model, and organizational structure as a whole.

To that end, I’ll explore various dimensions of how to think about the complexity and makeup of IT from a cost perspective along with the various levers to explore in how to optimize value/cost.  I’m being deliberate in mentioning both because it is very easy to reduce costs and have an adverse impact on service quality or agility, and that’s why thoughtful analysis is important in making informed choices on improving cost-efficiency.

Framing the Problem

Before looking at the individual dimensions, I first wanted to cover the simple mental model I’ve used for many years in terms of driving operating performance:

 

The model above is based on three connected components that feed each other in a continuous cycle:

  • Transparency
    • We can’t govern what we can’t see. The first step in driving any level of thoughtful optimization is having a fact-based understanding of what is going on
    • This isn’t about seeing or monitoring “everything”. It is about understanding the critical, minimum information that is needed to make informed decisions and then obtaining as accurate a set of data surrounding those points as possible.
  • Governance
    • With the above foundation in place, the next step is to have leadership engagement to review and understand the situation, and identify opportunities to improve.
    • This governance is a critical step in any optimization effort because, if there are not sustainable organizational or cultural changes made in the course of transforming, the likelihood of things returning to a similar condition will be relatively high.
  • Improvement
    • Once opportunities are identified, executing effectively on the various strategies becomes the focus, with the goal of achieving the outcomes defined through the governance process
    • The outcomes of this work should then be reflected in the next cycle of operating metrics and the cycle can be repeated on a continuing basis.

The process for optimizing IT costs is no different than what is expressed here: understand the situation first, then target areas of improvement, make adjustments, continue.  It’s a process, not a destination.  From here, we’ll explore the various dimensions of complexity and cost within IT, and the levers to consider in adjusting them.

 

At an Operating Level

Before delving into the footprint itself, a couple of areas to consider at an overall level are portfolio management and release strategy.

 

Portfolio management

Given that I am mid-way through writing an article on portfolio management and am also planning a separate one on workforce and sourcing strategy, I won’t explore this topic much beyond saying that having a mature portfolio management process can help influence cost-efficiency.

That being said, I don’t consider ineffective portfolio management to be a root cause of IT value/cost being imbalanced.  An effective workforce and sourcing strategy that aligns variable capacity to sources of demand fluctuation (within reasonable cost constraints) should enable IT to deliver significant value even during periods of increased business demand.  However, a lack of effective prioritization, disciplined estimation and planning, resource planning, and sourcing strategy, in combination with each other, can have significant and harmful effects on cost-efficiency and, therefore, generally provides opportunities for improvement.

Some questions to consider in this area:

  • Is prioritization effective in your organization? When “priority” efforts arise, are other ongoing efforts stopped or delayed to account for them, or is the general trend to take on more work without recalibrating existing commitments?
  • Are estimation and planning efforts benchmarked, reviewed, analyzed and improved, so the integrity of ongoing prioritization and slotting of projects can be done effectively?
  • Is there a defined workforce and sourcing strategy to align variable capacity to fluctuating demand so that internal capacity can be reallocated effectively and sourcing scaled in a way that doesn’t disproportionately have an adverse impact on cost? Conversely, can demand decline without significant need for recalibration of internal, fixed capacity?  There is a situation I experienced where we and another part of the organization took the same level of financial adjustment, but they had to make 3x the level of staffing adjustment given we were operating under a defined sourcing strategy and the other organization wasn’t.  This is an important reason to have a workforce and sourcing strategy.
  • Is resource planning handled on an FTE (e.g., role-based) or resource-basis (e.g., named resource), or some combination thereof? What is the average utilization of “critical” resources across the organization on an ongoing basis?

Release strategy

This is an area that often seems overlooked in my experience (outside product delivery environments) as a means to improve delivery effectiveness, manage cost, and improve overall quality.

Having a structured release strategy that accounts for major and minor releases, with defined criteria and established deployment windows, versus an arbitrary or ad-hoc approach, can be a significant benefit from both an IT delivery and a business continuity perspective.  Generally speaking, deployment cycles (in a non-CI/CD, DevSecOps-oriented environment) tend to consume time and energy that slow delivery progress, and the more windows that exist, the more disruption occurs over a calendar year.  When those windows are allowed to occur on an ad-hoc basis, the complexities of integration testing, configuration management, and coordination from a project, program, and change management perspective tend to increase in proportion to the number of release windows involved.  Similarly, the risk of quality issues occurring within and across a connected ecosystem increases as the process for stabilizing and testing individual solutions, integrating across solutions, and managing post-deployment production issues is spread across multiple teams in overlapping efforts.  Where standard integration patterns and reference architecture are in place to govern interactions across connected components, there are means to manage and mitigate risk, but generally speaking, it’s better and more cost-effective to manage a smaller set of larger, scheduled release windows than to allow a more random or ad-hoc environment to exist at scale.

 

Applications

In the application footprint, larger organizations or those built through acquisition tend to have a fairly diverse and potentially redundant application landscape, which can lead to significant cost and complexity, both in maintaining and integrating the various systems in place.  This is also true when there is a combination of significant internally (custom) developed solutions working in concert with external SaaS solutions or software packages.

Three main levers can have a significant influence along the lines of what I discuss in The Intelligent Enterprise:

  • Ecosystem Design
    • Whether one chooses to refer to this as business architecture, domain-driven design, component architecture, or something else, the goal is to identify and govern a set of well-defined connected ecosystems that are composable, made up of modular components that provide a clear business (or technical) capability or set of services
    • This is a critical enabler both to optimizing the application footprint and to promoting interoperability and innovation over time, as new capabilities can be more rapidly integrated into a standards-based environment
    • Where complexity comes about is where custom or SaaS/package solutions are integrated in a way that blurs these component boundaries, creating functional overlaps that lead to technical debt, redundancy, data integrity issues, etc.

 

  • Integration strategy
    • With a set of well-defined components, the secondary goal is to leverage standard integration patterns with canonical objects to promote interoperability, simplification, and ongoing evolution of the technology footprint over time (a brief sketch of a canonical object follows this list).
    • Without standards for integration, an organization’s ability to adopt new, innovative technologies will be significantly hindered over time, and the leverage of those investments marginalized, because of the complexity involved in bringing those capabilities into the existing environment rapidly without having to refactor or rewrite a portion of what exists to leverage them.
    • At an overall level, it is hard to argue against the point that technologies are advancing at a rate faster than any organization’s ability to adopt and integrate them, so having a well-defined and heavily leveraged enterprise integration strategy is critical to long-term value creation and competitive advantage.

 

  • Application Rationalization
    • Finally, with defined ecosystems and standards for integration, having the courage and organizational leadership to consolidate like solutions to a smaller set of standard solutions for various connected components can be a significant way to both reduce cost and increase speed-to-value over time.
    • I deliberately focused on the organizational aspects of rationalization, because one of the most significant obstacles in technology simplification is the courageous leadership needed to “pick a direction” and handle the objections that invariably result in those tradeoff decisions being made.
    • Technology proliferation can be caused by a number of things, but organizational behaviors can certainly contribute when two largely comparable solutions exist without one of them being retired solely based on resistance to change or perceived control or ownership associated with a given solution.
    • At a capability-level, evaluating similar solutions, understanding functional differences, and associating the value with those dimensions is a good starting point for simplifying what is in place. That being said, the largest challenge in application rationalization doesn’t tend to be identifying the best solution; it’s having the courage to make the decision, commit the investment, and execute on the plan, given that “new projects” tend to get more organizational focus and priority in many companies than cleaning up what they already have in place.  In a budget-constrained environment, the new, shiny thing tends to win in a prioritization process, which is something I’ll write about in a future article.
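
As a hedged sketch of what a canonical object and its per-system adapters might look like (the entity, field names, and source payloads are all invented for illustration):

```python
# Hedged sketch of a canonical object with per-system adapters.
# The entity, field names, and source payloads are invented.

from dataclasses import dataclass

@dataclass
class CanonicalCustomer:
    customer_id: str
    name: str
    email: str

def from_crm(payload: dict) -> CanonicalCustomer:
    """Adapter for a hypothetical CRM's native shape."""
    return CanonicalCustomer(payload["id"], payload["fullName"], payload["email"])

def from_billing(payload: dict) -> CanonicalCustomer:
    """Adapter for a hypothetical billing system's native shape."""
    return CanonicalCustomer(payload["acct"], payload["acct_name"], payload["contact"])

# Every consumer codes against CanonicalCustomer; swapping out the CRM
# means rewriting one adapter, not every integration that touches it.
print(from_crm({"id": "C-9", "fullName": "Acme Corp", "email": "ap@acme.test"}))
```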

Overall, the larger the organization, the more opportunity may exist in the application domain, and the good news is that there are many things that can be done to simplify, standardize, rationalize, and ultimately optimize what’s in place in ways that both reduce cost and increase the agility, speed, and value that IT can deliver.

 

Data

The data landscape and its associated technologies, especially when considering advanced analytics, have added significant complexity (and likely associated cost) in the last five to ten years in particular.  With the growing demand for AI/ML, NLP, and now Generative AI-enabled solutions, the ability to integrate, manage, and expose data from producer to ultimate consumer has taken on significant criticality.

Some concepts that I consider directionally important in relation to optimizing value/cost in data and analytics enablement:

  • Managing separation of concerns
    • Similar to the application environment, thinking of the data and analytics environment (OLTP included) as a set of connected components with defined responsibilities, connected through standard integration patterns is important to reducing complexity, enabling innovation, and accelerating speed-to-value over time
    • Significant technical debt can be created where the relationships among operational data stores (ODS), analytics technologies, purpose-built solutions (e.g., graph or time series databases), master data management tools, data lakes, lake houses, virtualization tools, visualization tools, data quality tools, and so on are not integrated in clear, purposeful ways.
    • Where I see value in “data centricity” is in the way it serves as a reminder to understand the value that can be created for organizations in leveraging the knowledge embedded within their workforce and solutions
    • I also, however, believe that value will be unlocked over time through intelligent applications that leverage knowledge and insights to accelerate business decisions, drive purposeful collaboration, and enable innovation and competitive advantage. Data isn’t the outcome, it’s an enabler of those outcomes when managed effectively.

 

  • Minimizing data movement
    • The size of the landscape and the number of solutions involved in moving source data from the original producer (whether a connected application, device, or piece of equipment) to the end consumer (however that consumption is enabled) have a significant impact on innovation and business agility.
    • As such, concepts like data mesh and data fabric, which enable distributed sourcing of data in near real time with minimized data movement to feed analytical solutions and/or deliver end user insights, are critical to consider in thinking through a longer-term data strategy.
    • In a perfect world, where data enrichment is not a critical requirement, the ability to virtualize, integrate, and expose data across various sources to conceptually “flatten” the layers of the analytics environment is an area where end consumer value can be increased while reducing cost typically associated with ETL, storage, and compute spread across various components of the data ecosystem
    • Concepts like zero ETL, data sharing, and virtualization are also key enablers that have promise in this regard

 

  • Limiting enabling technologies
    • As in the application domain, the more diverse and complex a data ecosystem is, the more likely it is that a broad set of technologies is in place, with overlapping or redundant capabilities.
    • At a minimum, a thoughtful process for reviewing and governing new technology introductions, evaluating how they complement, replace, or duplicate solutions already in place, is an important capability to have.
    • Similarly, it is not uncommon to introduce new technologies with somewhat of a “silver bullet” mindset, without considering the implications for supporting or operating those solutions, which can increase cost and complexity, or without a deliberate plan to replace or retire other solutions that provide a similar capability.
    • Simply said, technical debt accumulates over time through a set of individually rationalized and justified, but collectively suboptimized, short-term decisions.
  • Rationalize, simplify, standardize
    • Finally, where defined components exist, data sourcing and movement is managed, and technology introductions are governed, there should be an ongoing effort to modernize, simplify, and standardize what is already in place.
    • Data solutions tend to be very “purpose-built” in their orientation, to the degree that they enable a specific use case or outcome. The problem in this situation is that the desired business architecture can become the de facto technical architecture, with significant complexity created in the process.
    • Using a parallel, smaller-scale analogy, there is a reason that logical and physical data modeling are separate activities in application development (the former part of traditional “business design”, the latter part of “technical design” in waterfall-based approaches). What makes sense from a business or logical standpoint likely won’t be optimized if architected exactly as defined in that context (e.g., most business users don’t think intuitively in third normal form, nor should they have to).
    • Modern technologies allow for relatively cheap storage, but giving thought to how the underlying physical landscape should be designed from producer to consumer is critical to enabling insight delivery at speed while doing so within a managed, optimized technology environment.

Overall, similar to the application domain, there are significant opportunities to enable innovation and speed-to-value in the data and analytics domain, but a purposeful and thoughtful data strategy is the foundation for being cost-effective and creating long-term value.

 

Technologies

I’ve touched on technologies through the process of discussing optimization opportunities in both the application and data domains, but it’s important to understand the difference between technology rationalization (the tools and technologies you use to enable your IT environment) and application or data rationalization (the solutions that leverage those underlying technologies to solve business problems).

The process for technology simplification is the same as described in the other two domains, so I won’t repeat the concepts here beyond reiterating that a strong package or technology evaluation process (one that considers the relationship to existing solutions in place), combined with governance of new technology introductions, explicit plans to replace or retire legacy equivalents, and organizational readiness to support the new technologies in production, is critical to optimizing value/cost in this dimension.

 

Infrastructure

At an overall level, unless there is a significant compliance, competitive, privacy, or legal reason to do so, I would argue that no one should be in the infrastructure business unless it IS their business.  That may be a somewhat controversial point-of-view, but at a time when cloud and hosting providers are both established and mature, arguing the differentiated value of providing (versus managing) these capabilities within a typical IT department is a significant leap of faith in my opinion.  Internal and external customer value and innovation are created in the capabilities delivered through applications, not in the infrastructure, networking, and storage underlying those solutions.  This isn’t to say these capabilities aren’t a critical enabler; they definitely are.  The overall organizational goal in infrastructure, from my perspective, should be to ensure quality of service at the right cost (through third-party providers to the maximum extent possible), and then to manage and govern the reliability and performance of that set of environments, focusing on continuous improvement and enabling innovation as required by consuming solutions over time.

There are a significant number of cost elements associated with infrastructure, a lot of financial allocations involved, and establishing TCO through these indirect expenses can be highly complex in most organizations.  As a result, I’ll focus on three overall categories that I consider significant (cloud, hosted solutions, and licensing), while acknowledging there is normally opportunity to optimize value/cost in this domain beyond these three alone.  This is partially why working with a defined set of providers and managing and governing the process can be a way to focus on quality of service and desired service levels within established cost parameters, versus taking on the challenge of operationalizing a substantial set of these capabilities internally.

Certainly, a level of core network and cyber security infrastructure is necessary and critical to an organization under any circumstances, something I will touch on in a future article on the minimum requirements to run an innovation-centric IT organization, but even in those cases, that does not imply or require that those capabilities be developed or managed internally.

 

Cloud

With the ever-expanding set of cloud-enabled capabilities, there are three critical watch items that I believe have significant impact on cost optimization over time:

  • Innovation
    • Cloud platform providers are making significant advancements in their capabilities on an annual basis, some of which can help enable innovation
    • To the extent that some of the architecture and integration principles above are leveraged, and a thoughtful, disciplined process is used to evaluate and manage introduction of new technologies over time, organizations can benefit from their leverage of cloud as a part of their infrastructure strategy

 

  • Multi-cloud Integration
    • The reality of cloud providers today is also that no one is good at everything, and there is differentiated value in various services provided by each of them (GCP, Azure, AWS)
    • The challenge is how to integrate and synthesize these differentiated capabilities in a secure way without either creating significant complexity or cost in the process
    • Again, having a modular, composable architecture mindset with API- or service-based integration is critical in finding the right balance for leveraging these capabilities over time
    • Where significant complexity and cost can be created is where data egress comes into play from one cloud platform to another; consequently, the need for such data movement should, in my opinion, be minimized to situations where the value of doing so (ideally without persisting the data in the target platform) greatly outweighs the cost to operate in that overall environment

 

  • FinOps Discipline
    • The promise of having managed platforms that convert traditional capex to opex is certainly an attractive argument for moving away from insourced and hosted solutions to the cloud (or a managed hosting provider for that matter). The challenge is in having a disciplined process for leveraging cloud services, understanding how they are being consumed across an organization, and optimizing their use on an ongoing basis.
    • Understandably, there is not a direct incentive for platform providers to optimize this on their own, and today’s tools largely provide transparency into spend related to consumption of various services over time, rather than optimization itself.
    • Hopefully, as these providers mature, we’ll see more of an integrated platform within and across cloud providers to help continuously optimize a footprint so that it provides reliability and scalability, but without promoting over-provisioning or other costs that don’t provide end customer value in the process.

Given the focus of this article is cost optimization and not cloud strategy, I’m not getting into cloud modernization, automation and platform services, containerization of workloads, or serverless computing, though arguably some of those also can provide opportunities to enable innovation, improve reliability, enable edge-based computing, and optimize value/cost as well.

 

Internally Managed / Hosted

Given how far we are into the age of cloud computing, I’m assuming that legacy environments have largely been moved into converged infrastructure.  In some organizations, this may not be the case, and it should be evaluated along with the potential for outsourcing the hosting and management of these environments where possible (and competitive) at a reasonable value/cost level.

One interesting anecdote is how organizations don’t tend to want to make significant investments in modernizing legacy environments, particularly those in financial services resting on mainframe or midrange computing solutions.  That being said, given that these are normally shared resources, as the burden of those costs shifts (where teams selectively modernize and move off those environments) and allocations of the remaining MIPS and other hosting charges are adjusted, the priority of revisiting those strategies tends to change.  Modernization should be a continuous, proactive process rather than a reactive one, because the resulting technology decisions can otherwise be suboptimized, turning into lift-and-shift approaches rather than true modernization or innovation opportunities (I’d consider this under the broader excellence topic of relentless innovation).

 

Licensing

The last infrastructure dimension I’d call out relates to licensing.  While I’ve already addressed the opportunity to promote innovation and optimize expense through rationalizing applications, data solutions, or underlying technologies individually, there are three other dimensions worth consideration:

  • Partner Optimization
    • Between leverage of multi-year agreements on core, strategic platforms and consolidation of tools (even in a best-of-breed environment) to a smaller set of strategic, third-party providers, there are normally opportunities to reduce the number of technology partners and optimize costs in large organizations
    • The watch item would be to ensure such consolidation efforts consider volatility in the underlying technology environment (e.g., the commitment might be too long for situations where the pace of innovation is very high) while also ensuring conformance to the component and integration architecture strategies of the organization so as not to create dependencies that would make transition of those technologies more complex in the future

 

  • Governance and Utilization
    • Where licensing costs are either consumption-based or up for renewal, having established practices for revisiting the value and usage of core technologies over time can help in optimization. This can also be important in ensuring compliance with critical contract terms where appropriate (e.g., named user scenarios, concurrent versus per-seat agreements)
    • In one example a number of years ago, we decided to investigate indirect expense coming through software licenses and uncovered nearly a million dollars of software that had been renewed on an annual basis that wasn’t being utilized by anyone. The reality is that we treated these as bespoke, fixed charges and no one was looking at them at any interval.  All we needed to do in that case was pay attention and do the homework.

 

  • Transition Planning
    • The most important of these three areas is akin to having a governance process in place.
    • With regard to transition, the key is establishing a companion process to the software renewal cycle for critical, core technologies (i.e., those providing a critical capability or having significant associated expense).  This process would involve a health check (similar to package selection, but including incumbent technologies/solutions) at a point commensurate with the window of time it would take to evaluate and replace the solution if it were no longer the best option to provide a given capability.
    • Unfortunately, depending on the level of dependency that exists for third-party solutions, it is not uncommon for organizations to lack a disciplined process to review technologies in advance of their contractual renewal period and be forced to extend their licenses because of a lack of time to do anything else.
    • The result can be that organizations deploy new technologies in parallel with ones that are no longer competitive purely because they didn’t plan in advance for those transitions to occur in an organic way

Similar to the other categories, where licensing is a substantial cost component of IT expense, the general point is to be proactive and disciplined about managing and governing it.  This is a source of overhead that is easy to overlook and that can create undue burden on the overall value/cost equation.

 

Services

I’m going to write on workforce and sourcing strategy separately, so I won’t go deeply into this topic or direct labor in this article beyond a few points on each.

In optimizing cost of third-party provided services, a few dimensions come to mind:

  • Sourcing Strategy
    • Understanding and having a deliberate mapping of primary, secondary and augmentation partners (as appropriate) for key capabilities or portfolios/solutions is the starting point for optimizing value/cost
    • Where a deliberate strategy doesn’t exist, the ability to monitor, benchmark, govern, manage, and optimize will be complex and effective only on a limited basis
    • Effective sourcing, and certain approaches to how partners are engaged, can also be a key lever in enabling rapid execution of key strategies, managing migration across legacy and modernized environments, establishing new capabilities where a talent base doesn’t currently exist internal to an organization, and optimizing expense that may be fragmented across multiple partners or enabled through contingent labor in ad-hoc ways, all of which can help optimize the value/cost ratio on an ongoing basis

 

  • Vendor Management
    • It is worth noting that I’m using the word “vendor” here because the term is fairly well understood and standard when it comes to this process. In practice, I never use the word “vendor”, preferring “partner”, as I believe the latter signals a healthy approach and mindset when it comes to working with third parties.
    • Having worked in several consulting organizations over a number of years, it was very easy to tell which clients operated in a vendor versus a partnership mindset and the former of the two can be a disincentive to making the most of these relationships
    • That being said, organizations should have an ongoing, formalized process for reviewing key partner relationships, performance against contractual obligations, on-time delivery commitments, quality expectations, management of change, and achievement of strategic partner objectives.
    • There should also be a process in place to solicit ongoing feedback, both on how to improve effectiveness and the relationship, and to understand and leverage the knowledge and insights a partner has on industry and technology trends and innovation opportunities that can further increase value/cost performance over time.

 

  • Contract Management
    • Finally, having a defined, transparent, and effective process for managing contractual commitments and the associated incentives where appropriate can also be important to optimizing overall value/cost
    • It is generally true that partners don’t deliver to standards that aren’t established and governed
    • Defining service levels and quality expectations, utilizing fixed-price or risk-sharing models where appropriate, and then reviewing and holding both partners and the internal organization working with those partners accountable to those standards is important to having both a disciplined operating and a disciplined delivery environment
    • There’s nothing wrong with assuming everyone will do their part when it comes to living into the terms of agreements, but there also isn’t harm in keeping an eye on those commitments and making sure that partner relationships are held to evolving standards that promote maturity, quality, and cost effectiveness over time

Similar to other categories, the level of investment in sourcing, whether through professional service firms or contingent labor, should drive the level of effort involved in understanding, governing, and optimizing it, but some level of process and discipline should be in place almost under any scenario.

 

Labor

The final dimension to optimizing value and cost is direct labor.  I’m guessing, in writing this, that it’s fairly obvious I put this category last, and I did so intentionally.  It is often said that “employees are the greatest source of expense” in an organization.  Interestingly enough, “people are our greatest asset” has also been said many times as well.

In the section on portfolio management, I mentioned the importance of having a workforce and sourcing strategy and understanding the alignment of people to demand on an ongoing basis.  That is a given and should be understood and evaluated with a critical eye towards how things flex and adjust as demand fluctuates.  It is also a given that an organization focused on excellence should be managing performance on a continuing basis (including times of favorable market conditions) so as not to create organizational bloat or ineffectiveness.  Said differently, poor performance that goes unmanaged in an organization drags down average productivity, has an adverse impact on quality, and ultimately a negative impact on value/cost, because the working capacity of an organization isn’t being applied to ongoing demand and delivery needs effectively.  Where this is allowed to continue unchecked over too long a duration, the result may be an over-correction that can also have adverse impacts on performance, which is why it should be an ongoing area of focus rather than an episodic one.

Beyond performance management, I believe it’s important to think of all of the expense categories before this one as variable, which is sometimes not the case in the way they are evaluated and managed.  If non-direct-labor expense is substantial, a different question to consider is the relative value of “working capacity” (i.e., “knowledge workers”) by comparison with expense consumed in other things.  Said differently, a mental model that I used with a team in the past was that “every million dollars we save in X (insert dimension or cost element here… licensing, sourcing, infrastructure, applications) is Y people we can retain to do meaningful work.”

Wrapping Up

This has been a relatively long article, yet still only a high-level treatment of a number of these topics; hopefully it has been useful in calling out many of the opportunities available to promote excellence in operations and optimize value/cost over time.

In my experience, having been in multiple organizations that have realigned costs, it takes engaged and courageous leadership to make thoughtful changes versus expedient ones… it matters… and it’s worth the time invested to find the right balance overall.  In a perfect world, disciplined operations should be a part of the makeup of an effectively led organization on an ongoing basis, not the result of a market correction or fluctuation in demand or business priorities.

 

Excellence always matters, quality and value always matter.  The discipline it takes to create and manage that environment is worth the time it takes to do it effectively.

 

Thank you for taking the time to read the thoughts.  As with everything I write, feedback and reactions are welcome.  I hope this was worth the investment in time.

-CJG 04/09/2023

What Music Taught Me About Business

It’s been a while since I’ve had a chance to post an article, so I thought I’d take a pause on two of them that are in process and write about the relationship I’ve seen between music and work, because it’s been the source of reflection at different points in time over the years.

To set the stage, I’ve been playing the drums for over forty years, performing various styles of music, from trio jazz and big band, to blues, R&B, fusion, rock, pop music, and probably others I’m not remembering at the moment.  At my busiest time, I played forty jobs with eight groups over the course of a year separate from my “day job”, which was a lot to handle, but very fun at the same time.  Eventually, when there wasn’t time to “play out”, I started a YouTube channel where I continue to record and share music to the extent that I can.  The point being that music has been a lifelong passion, and I’ve learned things over time that have parallels to what I’ve experienced in the work environment, which I wanted to share here.

To provide some structure, I’ll tackle this in three parts:

  • Performing and high-performance teams
  • Being present in the moment
  • Tenacity, commitment, and role of mentoring

Performing and High-Performance Teams

I’ll start with a simple point: the quality of the music you hear in a live performance is not solely about the competence of the individual musicians; it’s about how they play as a group.

Having performed with many musicians over the years, one of the amazing feelings that occurs is when you are in the middle of a song, everyone is dialed in, the energy level is palpable, and it feels like anything is possible with where the music could go.  It’s particularly noticeable in settings like small group jazz, where you can change the style of the song on the fly, with the nod of your head or a subtle rhythmic cue to the other musicians, and suddenly everyone moves in a different direction.  The same is possible in other styles of music, but sometimes with less range of options.  The energy in those moments is amazing, both for the performers and the audience, in part because everyone is creating together and the experience is very fluid and dynamic as it unfolds.

There are three things that make an experience like this possible:

  • Everyone has to be engaged in what’s going on
  • Everyone has to be listening, communicating, and collaborating in an effective manner
  • The group has to be composed of highly capable players, because the overall quality of the experience will be limited by the least effective or competent of the collaborators

It’s not difficult to see how this relates to a business setting, where high-performance teams achieve greater results because of their ability to make the most of everyone’s individual contributions through effective communication and collaboration.  Where teams don’t communicate or adapt and evolve together, productivity and impact are immediately compromised.

The litmus test for the above could be asking the questions:

  • Is everyone on a team engaged?
  • Is everyone listening, contributing, and collaborating effectively (regardless of their role)?
  • Is everyone “on their game” at an individual performance level?

If any of the above isn’t a resounding “yes”, then there is probably opportunity to improve.

 

Being Present In the Moment

The second observation also relates to performing and handling mistakes.

One thing that I’ve always enjoyed about performing live is the challenge of creating an incredible experience for the audience.  That experience really comes down to the energy they feel from the performers and putting everything you have out there over the course of a show so there is nothing left to give by the time you’re done. 

What is very liberating and fun in that environment is that an audience doesn’t care what you do “as a day job” when they arrive at the venue or club where you’re performing.  As far as they are concerned, you’re “in the band” and the expectation (at some level) is that you’re going to be a professional and perform at a level that meets or exceeds their expectations.

Two things are interesting by comparison with a work environment in this regard:

  • First, it doesn’t matter as a performer what you did yesterday, last week, last month, or last year when you step on stage. The only thing the audience cares about is how you show up that night.  It’s a great mental model for the business environment, where it can be easy to become complacent, fall back on past accomplishments, and forget that we prove ourselves in the value we create each day.
  • Second, the minute you make a mistake on stage (and the more you stretch for things in the course of a performance, the more likely it will occur), you recover and you move forward. You don’t waste time looking backwards, because music is experienced in the moment you create it, the moment passes, and there is a new opportunity to make something special happen.  This is something I’ve struggled with and worked on over time because, as a highly motivated person, it’s frustrating to make mistakes, and that can lead to a tendency to beat yourself up when they happen.  While there is a benefit to reflecting on and learning from mistakes, the most important thing when they occur is not to let one mistake lead to another, but rather to focus, recover, and make adjustments so the next set of things you do is solid.

On the latter point, it’s worth noting that when I record music for my channel, I try to do so using a single take every time.  I do that because it’s the most like live performance and forces a level of focus and intensity as a result.  The approach can lead to a mistake here or there in a recording, and that’s ok.  I’d rather make mistakes reaching for something difficult than do something “perfectly” that is easy by comparison.

 

Tenacity, Commitment, and the Role of Mentoring

The final portion of this article is a story in a couple stages, and it’s about dealing with adversity.

Dealing with Failure

When I arrived at the University of Illinois my freshman year, I signed up for tryouts for both the concert and jazz bands.  In the case of concert band, I knew what to expect (for the most part), given there were prepared selections, a couple of solo pieces, and a level of sightreading you were asked to do in the audition itself.  The audition was in front of a panel of fourteen people, including all of the directors, and, while relatively uncomfortable, I nailed the parts pretty well, made the First Concert Band, and was set.  Felt good… Check the box.

For jazz band, I didn’t know what to expect, other than signing up for a time slot at the Music building, showing up, and doing whatever was needed.  Having played for at least two years in high school leading up to the audition, I thought I had things together and wasn’t really worried.  That was probably my first mistake: complacency.  What I didn’t consider was that, in any given year, there were between 24 and 32 drummers trying out for 6 to 8 spots in the four jazz bands at the time, and some positions were already spoken for, given guys returning from the previous year (even though they were in the pool of people “auditioning”).  Generally speaking, none of the people auditioning were bad players either, and it was much more competitive than I had realized or prepared for.  As is probably obvious, I did the audition and didn’t necessarily mess anything up, but I didn’t crush it either.  I didn’t make it and was massively disappointed, because I didn’t want to have to give up playing for an entire year.

This is where the first pivot happened: I decided, regardless of having failed, I wasn’t going to stop playing.  I brought my drums down to school, even though I had to squeeze them into the closet of my dorm room.  I met with the director of the second jazz band, told him that, while I didn’t make it, I wanted to form a small group and have a place to practice, and asked for his help.  In response, he not only got me access and permission to use a room in the music building where I could go to practice (myself or with others), but he also gave me the contact sheet for everyone who made the four jazz bands.  I then called everyone on the list, starting with the top band, asking if anyone was interested in getting together to play.  Eventually, I was able to cobble together a group from members of the third and fourth bands, we met a number of times over the course of the year, a friend helped shuttle me and my equipment to the music building, and I was able to keep playing despite the situation.

Separately, I also attended many of the performances of the bands over the course of the year, so I could see the level of ability of the drummers who did make the cut as well as the style and relative difficulty of the music they had to play.  In this regard, I wanted to understand what was expected so I could prepare myself effectively the following year.

The learning for me from this, many years before I experienced it in a professional setting, was to not look at challenges or adversity as limiting constraints, but as something to be worked around and overcome.  That is ultimately about tenacity and commitment.  I could have spent that year on the sidelines, but instead I found another way to get valuable experience, play with musicians who were in the bands, build some relationships, and probably (to a degree) make an impression on one of the directors that I was willing to do whatever it took to keep playing.

 

The Role of a Mentor

Having found a way to stay active, the other primary thing that was on my mind heading into the summer after my freshman year was doing everything I could to not fail again.

I sought out a teacher who was very well known, with an exceptional reputation in Chicago, having taught for probably forty years at the time.  He was teaching through a local community college, so I signed up for a “class” of private instruction, and we scheduled two one-hour lessons a week.  In preparation, he told me to buy two books, both of which looked like they were printed in 1935 (ok, probably more like 1952), and I immediately thought I might have made a mistake.

Despite the painful learning of the previous year, I somehow went into my first lesson thinking, “ok, I’m going to impress him, and he’ll help me figure out what went wrong last fall and fix it.”  That wasn’t how things went.  Rather than have me play anything on the drum set, he had me read from one of the books, on a practice pad, in a way that felt like I was back in third grade doing basic percussion stuff I hadn’t really thought about in a long time.  That was my first lesson: it took him less than five minutes to establish where I was and rip me down to my foundation.  There was no “impressing” him; there was work to do, and seemingly quite a lot of it.  He then took notes in each of the books and gave me assignments to work on, the last of which was to apply patterns from the second book to the drum set, which was essentially a third activity in itself.  That lesson was on Monday.  I had until Thursday.  I left thinking “there is literally no way I’m going to be ready.”

And so, it began.  I set up my drums and a practice pad in my parents’ garage and set out practicing two hours a day, every day.  I wanted to show him I could do it.  I got to the next lesson and nailed every one of the exercises perfectly.  He nodded his approval, marked up both books again… and I left thinking “there is literally no way I’m going to be able to do that again.”  The next lesson was Monday.

I did the same thing: practiced two hours a day, nailed it all, he gave me more, and the cycle repeated.  By the end of the summer, we had completed both books, a set of work on brush techniques, Latin music styles, and some other things he had in his bag of tricks.  I never missed a single lesson.  I never missed a day of practice the entire summer.

Overall, once I got past those first couple lessons, two things happened:

  • I didn’t want to disappoint him
  • I didn’t want to blow the streak I had going. I wanted to finish with a perfect record

Returning in the fall, I was completely in control of what I was doing, and I made the fourth jazz band.  He told me I was one of the best students he ever had, which was very humbling given what an exceptional teacher he was and how many students he had taught over so many decades.

In retrospect, part of what really drove me was the level of respect I had for him as a teacher and mentor.  He was very direct and not always gentle in his choice of words, but his goal was discipline and excellence, and it was clear that he was only invested in making me as good as I could be if I put in the work.

The parallels to the work environment are pretty obvious here as well: the value of hard work itself and of having a good mentor to guide you along the way.  A great coach knows how to help you address your gaps in the interest of being the best you can be, but you also have to be open and receptive to that teaching, and that’s not always easy when all of us want to believe we’re fundamentally doing a “good job”.  Sometimes our greatest challenges are basic blocking-and-tackling issues, like the ones he made evident to me within five minutes of our first lesson.

 

Wrapping Up

I’ve said for many years that I wish I could think of things “at work” the way that I do when I perform.  In both cases, I strive for excellence, but in the case of music, I think I’ve historically been more accepting of the reality that mistakes are part of learning and getting better, probably because I don’t believe as much is “at stake” when I play versus when I work.

Hopefully some of the ideas have been worth sharing.  Thanks for taking the time to read them.  Feedback and reactions are welcome as always.

-CJG 09/29/2022

Defining a “Good Job”

In line with establishing the right environment to achieve Excellence by Design, I thought it would be worthwhile to explore the various dimensions that define a great workplace. 

In my experience, these conversations can tend to be skewed in one or two directions, but rarely seem holistic in terms of thinking through the various aspects of the employee experience.  Maintaining a healthy workplace and driving retention is ultimately about striking the right balance for an individual on terms ultimately defined by them.

I’ll cover the concept at an overall level, then address each of the dimensions as I think of them in our current post-COVID, employee-driven environment.

 

The Seven Dimensions

At a broad level, the attributes that I believe define an employee’s experience are:

  • What you do
  • Who you work for
  • Who you work with
  • Where you work
  • What you earn
  • Culture
  • Work/life balance

In terms of maintaining a productive workplace, I believe that a motivated, engaged employee will ultimately want the majority of the above dimensions to be in line with their expectations.

As a litmus test, take a look at each of the above attributes and ask whether that aspect of your current job is where it should be (in a “yes”/”no”/”sort of” context).  If “sort of” was an answer, I’d argue that should be counted as a “no”, because you’re presumably not excited, or you would have said “yes” in the first place.  If three or more of your answers are “no”, you probably aren’t satisfied with your job and would consider the prospect of a change if one arose.
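As a toy illustration of that scoring rule, here is a minimal sketch in Python; the answers shown are hypothetical, and this is a thinking aid rather than a real assessment instrument.

```python
# A toy scoring of the seven-dimension litmus test described above.
# "Sort of" counts the same as "no"; three or more effective "no" answers
# suggest the person would consider a change if one arose.

DIMENSIONS = [
    "What you do", "Who you work for", "Who you work with",
    "Where you work", "What you earn", "Culture", "Work/life balance",
]

def likely_satisfied(answers: dict) -> bool:
    """answers maps each dimension to 'yes', 'no', or 'sort of'."""
    effective_nos = sum(1 for d in DIMENSIONS if answers.get(d, "no") != "yes")
    return effective_nos < 3

example = {d: "yes" for d in DIMENSIONS}
example.update({"What you earn": "sort of", "Where you work": "no", "Culture": "no"})
print(likely_satisfied(example))  # False: "sort of" counts as "no", making three in total
```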

While it can be the case that a single attribute (e.g., being significantly undercompensated, having serious issues with your immediate manager) can lead to dissatisfaction and (ultimately) attrition, my belief is that each of us tend to consider most of the above dimensions when we evaluate the conditions of our employment or other opportunities when they arise.

From the perspective of the employer, the key is to think through how the above dimensions are being addressed to create balance and a positive environment for the employees at an individual level.  Anecdotally, that balance translates into how someone might describe their job to a third-party, such as “I work a lot of long hours… BUT… I’m paid very well for what I do”.  In this example, while work/life balance may be difficult, compensation is indexed in a way that it makes up for the difference and puts things into balance.  Similarly, someone could say “I don’t make a lot of money… BUT… I love the people I work with and what I get to do each day.”

The key question from a leadership standpoint is whether we only consider one or two dimensions in “attracting and retaining the best talent” or whether we are thoughtful and deliberate about considering the other mechanisms that drive true engagement.  What we do with intention turns into meaningful action… and what we leave to chance puts employees in a potentially unhealthy situation that exposes companies to unnecessary risk of attrition (not to mention a poor reputation in the marketplace as a prospective employer).

Having laid that overall foundation, I’ll provide some additional thoughts on the things that I believe matter in each dimension.

 

What you do

Fundamental to the employee experience is the role you play, the title you hold, how well it aligns to your aspirations, and whether you derive your desired level of satisfaction from it, even if that manifestation is as simple as a paycheck.

Not everyone wants to solve world hunger and that’s ok.  Aligning individual needs and capabilities to what people do every day creates the conditions for success and job satisfaction.

One simple thing that can be done from an employer’s standpoint beyond looking for the above alignment is to recognize and thank people for the work they do on an ongoing basis.  It amazes me how the easiest thing to do is say “thank you” when people do a good job, and yet how often that isn’t acknowledged.  Recognition can mean so much to the individual, to know their work is appreciated and valued, yet it is something I’ve seen lacking in nearly every organization I’ve worked over the last thirty years.  Often the reasoning given is that leaders are “too busy”, which is unfortunate, because no one should ever be so busy that a “thank you” isn’t worth the time it takes to send it.

 

Who you work for

There is an undeniable criticality to the relationship between an employee and their immediate manager, but I believe the perception of the broader leadership in the organization matters as well.

Starting at the manager, the litmus test for a healthy situation could be some of the following questions:

  • Is there trust between the individual and their manager?
  • Does the employee believe their manager has their best interest at heart and is invested in them, personally and professionally?
  • Does the employee believe their manager will be an effective advocate for them in terms of compensation, advancement, exposure to other opportunities, etc.?
  • Does the employee see their manager as an enabler or as an obstacle when it comes to decision making?
  • Does the employee derive meaningful feedback and coaching that helps them learn and develop their capabilities over time?
  • Does the employee feel comfortable, supported, and recognized in their day-to-day work, especially when they take risks in the interest of pursuing innovation and stretch goals?

At an organizational level, the questions are slightly different, but influence the situation as well:

  • Does the organization recognize, appreciate, and promote individual contributions and accomplishments?
  • Does the organization promote and demonstrate a healthy and collaborative climate amidst and across its leadership?
  • Do the actions of leaders follow their words? Is there integrity and transparency overall?

Again, while the tendency is to think about the employee experience in terms of their immediate manager, how they perceive the organizational leadership as a whole matters, because it can contribute to their willingness to stay and possibly become part of that leadership team down the road.  Is that environment a desirable place for an employee to be?  If not, why would they contribute at a level that could lead them there?

 

Who you work with

The people you work with in the context of your job can take on multiple dimensions, especially when you are in a services business (like consulting), where your environment is a combination of people from your organization and the clients with whom you work on an ongoing basis.  Having worked with some highly collaborative and also some very aggressive clients over the years, those interactions can definitely have an impact on your satisfaction with what you do, particularly if those engagements are longer-term assignments.

From an “internal” standpoint, your team (for those leading others), your peers, your internal customers, and so on tend to define your daily experience.  While I consider culture separate from the immediate team, there is obviously a relationship between the two.

Regardless of the overall culture of the organization, as I wrote about in my Engaged Leadership and Setting the Tone article, our day-to-day interactions with those directly collaborating with us can be very different.

Some questions to consider in this regard:

  • Do individuals contribute in a healthy way, collaborate and partner effectively, and maintain a generally positive work environment?
  • Do people listen and are they accepting of alternate points of view?
  • Does the environment support both diversity and inclusion?
  • Is there a “we” versus a “me” mentality in place?
  • Do you trust the people with whom you’re working on an ongoing basis?
  • Can you count on the people with whom you work to deliver on their commitments, take accountability, communicate with you effectively, and help you out when you need it?

Again, there are many dimensions that come into the daily experience of an employee, and it depends on the circumstances and role in terms of what to consider in evaluating the situation.

 

Where you work

In the post-COVID era, I think of location in terms of three dimensions: the physical location where you work, whether you can work remotely, and the level of travel required as part of your job.

For base location, various considerations can weigh on the employee experience, assuming they physically need to go to the workplace: ease of access (e.g., if it’s in a congested metropolitan area), nearby points of interest (something major cities offer, but smaller, rural locations generally don’t), the level and nature of commuting involved (and whether that is manageable), cost-of-living considerations, the safety of the area surrounding the workplace itself, and so on.

Where remote work is an option, I’m strongly biased towards leaning in the direction of employee preference.  If an individual wants to be in the office, then there should be reasonable accommodation for it; conversely, if they prefer a fully remote environment, then that should be supported as well.  In the world of technology, given that distributed teams and offshoring have been in place for decades, it’s difficult to argue that it’s impossible to be effective when people aren’t physically co-located.  Where collaboration is beneficial, it is certainly possible to bring people together in a workshop-type setting and hammer out specific things.  My belief, however, is that it’s possible to work in a largely remote setting and maintain healthy relationships, so long as people are more deliberate about connecting (e.g., scheduling individual meetings) than they would need to be when physically co-located.

Finally, when it comes to travel, this is again measured against the preferences of the individual.  I’ve gone from jobs with little to no travel involved to one where I lived the “road warrior” life and traveled thirty-three weeks in one year… and it was grueling.  That being said, I have friends who have lived on the road for many years (largely in consulting) and loved it, so empirically the impact of travel on job satisfaction depends a lot on the person and whether they enjoy it.

 

What you earn

Compensation is actually one of the easier dimensions to cover, because it’s tangible and measurable.  As an employer, you either compensate people in relation to the market value of the work they are performing, or you don’t, but the data is available and employees can do their own due diligence to ascertain whether your compensation philosophy is to be competitive or not.  With market conditions being what they are, it seems self-defeating to not be competitive in this regard, because there are always abundant opportunities out there for capable people, and not paying someone fairly seems like a very avoidable reason to lose talent.

Where I have apprehension in the discussion, both as an employee and as someone who has had to communicate it to individuals, is when an organization approaches the conversation as “let’s educate you on how to think about total compensation”… and then presents a discussion on everything other than base pay.  Is there a person who doesn’t consider their paycheck as their effective compensation on an ongoing basis?  Conversely, has anyone left a job primarily because they didn’t like the choice of healthcare provider in the benefit plan or the level of the 401(k) matching contribution?  The latter scenarios are certainly possible, though I doubt they represent the majority of compensation-related attrition situations.

Of course, variable compensation can and does matter from an overall perspective, as do other forms of incentives such as options, equity, and so forth.  I’ve worked in organizations and seen models that involve pretty much every permutation, including where variable compensation is formula-based (with or without performance inputs), fixed at certain thresholds, or determined on a largely subjective basis.  That being said, in a tough economy with the cost of just about everything on the rise, most people aren’t going to look towards a non-guaranteed variable income component (discretionary or otherwise) to help them cover their ongoing living expenses.  Nice to have?  Absolutely.  The foundation for a sense of employee security in an organization?  Definitely not.

 

Culture

Backing up to the experience of the workplace as a whole, I separate culture from the people with whom an employee works for a reason.  In most organizations, culture is manifest in two respects: what a company suggests it is and what it actually is.

Across the seven organizations where I’ve been fortunate to work over the last thirty years, only two of them actually seemed to live into the principles or values that they expressed as part of their culture.  The implication for the other five organizations was that the actual culture was something different and, to the extent that reality was not always healthy, it had a negative impact on the desirability of the workplace overall.

Culture can be both energizing and engaging to the degree it is a positive experience.  If it is the opposite, then the challenge becomes what was referenced in the team section: your ability to establish a “culture within the culture” that is healthier for the individual employee.  This is somewhat of a necessary evil from my perspective, because changing an overall culture within an organization is extremely challenging (if not impossible) and takes a really long time, even with the best of intentions.  In practice, however, a sub-culture associated with a team is, at best, a short-term fix, because ultimately most teams need to partner and collaborate with others outside their individual unit, and unhealthy behaviors and communication in the culture at large will eventually erode the working dynamics within that high-performance team.

 

Work/life balance

The final dimension is somewhat obvious and, again, very subjective, which is the level of work/life balance an individual is able to maintain and how well that aligns to their goals and needs.  In some cases, it can be that someone works more than is “required” because they enjoy what they are doing, are highly motivated, or seeking to expand their knowledge or capability in some way.  The converse, however, can also be true where an individual works in an unsustainable way, their personal needs suffer, and they end up eventually becoming less productive at work.

From the perspective of the employer, at a minimum it is a good idea to have managers check in with their team members to understand where they are in terms of having the right balance and do what they can to help enable employees to be in a place that works for them.  To the extent these discussions don’t happen, then some of the aspects of the relationship between an employee and their immediate manager may suffer and the impact from this dimension could be felt in other areas as well.

 

Wrapping up

So, bringing things together, the goal was to introduce the various dimensions of what makes a work environment engaging and positive for an employee, along with some thoughts on how I think of each of them.

If I were to attach a concept/word to each to define what good looks like, I would suggest:

  • What you do – REWARDING
  • Who you work for – TRUSTED
  • Who you work with – ENERGIZING
  • Where you work – CONVENIENT
  • What you earn – REASONABLE
  • Culture – EMPOWERING
  • Work/life balance – ALIGNED

To the degree that leaders pay attention to how they are addressing each of these seven areas, individually and collectively, I believe it will have a positive impact on the average employee experience, productivity and engagement, and the performance of the organization overall.

I hope the thoughts were worth the time spent reading them.  Feedback, as always, is welcome.

-CJG 06/16/2022

Excellence By Design

Background

As I began this journey and subsequently started to assemble topics about which to write, I noticed both an overwhelming set of ideas coming (a good problem to have) and a very unclear relationship among the concepts that were running quite rapidly through my mind (not a good thing).

Upon further reflection, it occurred to me that the ideas all centered around the various dimensions of leading a technology organization at different levels of specificity.  To that end, I thought I should set the stage a bit, in the interest of making things more cohesive in what I may write from here.

 

On the Pursuit of Excellence

At an overall level, what better place to start than a simple premise: Excellence is a choice.

Shooting for excellence is a commitment that requires a lot on a practical level, starting with courageous leadership, because it is a perpetually moving target that requires adaptability, tenacity, and a willingness to accept change as a way of life.  Excellence isn’t accidental; it is a matter of organizational will and the passion to pursue aspirations beyond what, at times, may feel “realistic” or “practical”.  It requires a belief in what is possible and is defined along multiple dimensions, which we’ll explore briefly here, namely:

  • Relentless Innovation
  • Operating with Agility
  • Framework-Driven Design, and
  • Delivering at Speed

Relentless Innovation

Starting with vision, some questions to consider in the context of an overall strategy:

  • Is it clear and understood across the organization, along with its intended outcome (e.g., what success looks like)?
  • Is it one that connects to individuals in the organization, their roles and ongoing contributions, or are those disconnected concepts (i.e., is it something that individuals take to heart)?
  • Can it evolve as circumstances change while maintaining a degree of fundamental integrity (e.g., will it stand the test of time or need to be continually redefined)?
  • Is it actionable? Can tangible steps be taken to drive progress towards its ultimate goals?
  • Is it “deliberate”/intended/proactive or was it defined in a reactive context (e.g., in response to a competitor’s actions)?
  • Are day-to-day decisions made with the strategy in mind?

Overall, the point is to have a thoughtful, proactive strategy that is actionable, connected to ongoing decisions, and embraced by the broader organization.

Where this becomes more interesting is in how we think of strategy in relation to change, which is where the next concept comes into play.  Relentless innovation is the notion that anything we are doing today may be irrelevant tomorrow, and that we should therefore continuously improve and reinvent our capabilities into ones that create the most long-term value.  This is much easier said than done, because it requires a lot of organizational humility and a willingness to tear down existing structures and rebuild new ones in their place.  That forces a degree of risk tolerance, because there is safety in the established practices and solutions of today, especially if they’ve created value.  On the other hand, success can be very detrimental insofar as complacency can become part of the organizational mindset and change slows until the future is essentially an iteration of the present.

 

Operating with Agility

Looking at IT Operations, a number of questions come to mind that may be the subject of future articles:

  • Is there a mindset of being cost-efficient (driving the highest value/cost ratio)?
  • Is there a culture of continuous improvement and innovation in place?
  • Is there a strategy for incorporating and optimizing the relationship of project and product teams (to the extent that a full product orientation isn’t feasible)?
  • Is there a sourcing strategy in place that is deliberate, governed, optimized (whether insourced, outsourced, or some combination thereof)?
  • Are portfolio management processes effective and aligned to business strategy?
  • Is there a highly transparent, but extremely lightweight operating infrastructure in place to facilitate engagement and value creation?
  • To what degree is talent rotation and development part of the culture? Are people stuck in the same organization or silo for long periods of time, or are high potential leaders moved between teams to facilitate a higher degree of knowledge sharing, development, and improvement?

Having worked in IT Ops, the largest issue I’ve seen in a number of companies is an overemphasis on process and infrastructure relative to transparency and enablement.  This is a tricky balance to strike, but arguably, I’d much rather have a less “mature” operating environment (IT for IT) that produces directionally correct information and drives engagement than a heavy, cumbersome process that becomes a distraction from producing business outcomes.  A simple litmus test for the latter type of environment is whether, in discussion, teams talk about the process and tools versus the outcomes, decisions, and impact.

 

Framework-Driven Design

Shifting focus to technology, I believe the opportunity is to think differently about the overall solution architecture of future ecosystems.  Much has been written and discussed relative to modern or cloud native applications, data-centric design, DevSecOps, domain-driven design, and so on.

What fundamentally bothers me about solution design approaches is that, when focusing on one dimension (e.g., data centricity), other dimensions of the more holistic view of modern application design are left out, and it then becomes a challenge for delivery teams to integrate one or more of these concepts in practice without a way to synthesize them into one cohesive approach.  This is where framework-driven design can be an interesting approach to consider.

In my definition, framework-driven design is focused on architecting a connected ecosystem and operating environment intended to promote resiliency, interoperability, and application-agnostic integration, such that individual solution components can be upgraded or replaced over time at a rapid pace without disrupting the capability of the ecosystem as a whole.

I will explore this topic further in a future article, but the base premise is to design an overall solution that performs complex tasks given multiple components integrated in standardized ways, leveraging modern, cloud native technologies, with integrated data that feeds embedded analytics capabilities as part of the operation of the ecosystem.

The framework itself, therefore, becomes a platform and the individual components are treated as replaceable parts that enable a best-of-breed mentality as new capabilities emerge that become advantageous to integrate with the framework over time.
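To make the “replaceable parts” idea concrete, below is a minimal sketch in Python assuming a simple component-registry pattern; the class and method names are hypothetical illustrations, not a prescribed implementation.

```python
# A minimal sketch of "the framework becomes a platform": the framework owns
# a stable contract, and individual components are replaceable parts
# registered against it.  All names here are illustrative.

from typing import Protocol

class AnalyticsEngine(Protocol):
    """The stable contract the framework depends on (not a specific product)."""
    def score(self, record: dict) -> float: ...

class BaselineEngine:
    def score(self, record: dict) -> float:
        return float(record.get("value", 0))

class ImprovedEngine:
    def score(self, record: dict) -> float:
        # Stand-in for a newer, better component adopted later
        return float(record.get("value", 0)) * 1.1

class Framework:
    def __init__(self) -> None:
        self._components: dict = {}

    def register(self, name: str, component: AnalyticsEngine) -> None:
        # Swapping a component here doesn't disturb anything that calls run()
        self._components[name] = component

    def run(self, name: str, record: dict) -> float:
        return self._components[name].score(record)

fw = Framework()
fw.register("scoring", BaselineEngine())
print(fw.run("scoring", {"value": 10}))   # 10.0
fw.register("scoring", ImprovedEngine())  # best-of-breed replacement over time
print(fw.run("scoring", {"value": 10}))   # 11.0 -- ecosystem capability intact
```

The essential property is that consumers depend only on the contract, so a best-of-breed swap becomes a registration change rather than a redesign.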

 

Delivering at Speed

From a delivery standpoint, as tempting as it is to write about iterative development (or Agile in particular) as a cure-all, the reality is that more organizations suffer from a lack of discipline than a lack of methodology.

The unfortunate myth that needs to be explored and unwound is that executing with discipline means value will be delayed when, in fact, the exact opposite is true.  It is a generalization, but when a build team moves faster by abandoning process or rigor, the immediate impact is usually a level of technical debt that will create drag, either in the initial delivery effort or in subsequent ones.

Quality doesn’t happen by accident.  It is something that needs to be planned and built into a work product from the kickoff of a delivery effort, regardless of the methodology or operating model employed.

I will likely write more on this topic given the number of opportunities that exist, but suffice it to say that you can’t achieve excellence without executing as flawlessly as possible… and discipline is needed to accomplish that.

 

Wrapping Up

Overall, the goal was to provide a quick summary of the various dimensions that I believe are important to consider in leading an organization.  No doubt, there may be questions or omissions (intentional or unintended), as this was a first pass at how I think about it.

What about people and culture?  Well… that’s part of operating effectively… as an example.

Hopefully this was a good starting point and provided some food for thought.  Feedback, questions, and reactions are always welcome.

Looking forward to continuing this journey.

-CJG 10/28/2021

Where to Begin…

Seven organizations… nearly thirty years of challenges and learning… what a journey it has been.

In mid-2016, I had an idea to start a blog, “The First 25…”, which I envisioned as a set of reflections to commemorate the first twenty-five years of my professional life.

I began creating an outline, starting from my first work in Windows 3.1 tax software development at Price Waterhouse in 1992, jotting down concepts, anecdotes, and learnings that I had accumulated.  I intended for it to be both a means to share my experiences as well as a mechanism to facilitate introspection and growth.  I started writing the first set of articles, ran out of time, lost interest, and put the concept on the shelf for five years… until I recently felt like maybe it was time to revisit the idea.

It’s an odd truth that we learn to value experience only once we actually have it… and how relative that all becomes over the passage of time.  In my early days as a developer, I remember looking at job postings in the newspaper, seeing ads for candidates with “15 years of experience” and thinking that you both had to be a dinosaur and some form of fully enlightened professional by the time you reached that point in your career.  Fifteen years is a REALLY long time, after all, right?  That had to be all you’d need to learn everything there was to learn and have it down to the point you were almost on autopilot with anything the business world could throw at you.  In retrospect, perhaps that logic makes a degree of sense when you are 21 years old and 15 years represents almost 75% of your lifetime…

In any case, looking back, there have been many lessons and experiences that helped me become the person I am today.  From the fear and inexperience of my first days managing projects, to the many late nights pounding away at a keyboard, trying to get that next piece of code working… the setbacks, the struggles, and quite often the painful mistakes that taught me the value of “how not to” do things.

My hope is to use this blog to share some of those experiences, thoughts on leadership, strategy, technology, and other topics as and when they arise…

A career is really nothing more than a collection of experiences and what we ultimately do with them to become better leaders and professionals… Hopefully the ideas will be worth reading; feedback is welcome and appreciated.  I’m still on the journey, seeking that enlightenment I thought I’d have when I got to “15 years”… albeit today, I know I’ll never get there and just want to keep learning and improving to have as much of a positive impact as I can on the teams with whom I work and the organization(s) to which I belong.

Thank you for taking the time to read this and/or anything else that may follow.  All the best in your professional endeavors…

-CJG 08/11/2021