Enterprise Architecture in an Adaptive World

Overview

Having covered a couple of future-oriented topics in Transforming Manufacturing and The Future of IT, I thought it would be good to come back to where we are with Enterprise Architecture as a critical function for promoting excellence in IT.

Overall, there is a critical balance to be struck in technology strategy today.  Technology-driven capabilities are advancing faster than any organization can reasonably adopt and integrate them (as is the exposure in cyber security).  Even if you could keep pace, the change management burden you would place on end users would be highly disruptive and thereby undermine your desired business outcomes.  In practice, rapidly evolving, sustainable change is the goal, not any one particular “implementation” of the latest thing.  This is what Relentless Innovation is about, as referenced in my article on Excellence by Design.

 

Connecting Architecture back to Strategy

In the article, Creating Value Through Strategy, I laid out a framework for thinking about IT strategy at an overall level that can be used to create some focal points for enterprise architecture efforts in practice, namely:

  • Innovate – leveraging technology advancements in ways that promote competitive advantage
  • Accelerate – increasing speed to market/value to be more responsive to changing needs
  • Optimize – improving the value/cost ratio to drive return on technology investments overall
  • Inspire – creating a workplace that promotes retention and enables the above objectives
  • Perform – ensuring reliability, security, and performance in the production environment

The remainder of this article will focus on how enterprise architecture (EA) plays a role in enabling each of these dimensions given the pace of change today.

 

Breaking it Down

Innovate

Adopting new technologies for maximum business advantage is certainly the desired end game in this dimension, but unless there is a truly unique, one-off situation, the role of EA is fairly critical in making these advancements leverageable, scalable, and sustainable.  It’s worth noting, by the way, that I’m specifically referring to “enterprise architecture” here, not “solution architecture”, which I would consider to be the architecture and design of a specific business solution.  One should not exist without the other and, to the degree that solution architecture is emphasized without a governing enterprise architecture framework in place, the probability of significant technical debt, delivery issues, lack of reliability, and a host of other issues skyrockets.

EA plays a role in promoting innovation, at a minimum, by exploring market trends and looking for enabling technologies that can promote competitive advantage, but also, and very critically, by establishing the standards and guidelines by which new technologies should be introduced and integrated into the existing environment.

Using a “modern” example, I’ve seen a number of articles of late on the role of GenAI in “replacing” or “disrupting” application development, from low-code/no-code solutions to the SaaS/packaged software domain, to everywhere in between.  While this sounds great in theory, it shouldn’t take long for the enterprise architecture questions to surface:

  • How do I integrate that accumulated set of “point solutions” in any standard way?
  • How do I meaningfully run analytics on the data associated with these applications?
  • How do I secure these applications so that I’m not exposed to the same kinds of vulnerabilities I would face with any open-source technology (i.e., they are generated by an engine that may have inherent security gaps)?
  • How do I manage the interoperability between these internally-developed/generated solutions and standard packages (ERP, CRM, etc.) that are likely a core part of any sizeable IT environment?

In the above example, even if I find a way to replace existing low-code/no-code solutions with a new technology, it doesn’t mean that I escape the challenges that exist with leveraging those technologies today.
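
To make the standards point concrete, here is a minimal sketch (in Python, with entirely hypothetical names and methods) of the kind of integration contract EA might define, so that every generated or low-code point solution exposes health, data, and audit information in one uniform way instead of accumulating one-off interfaces:

```python
from typing import Any, Iterable, Mapping, Protocol

class PointSolutionContract(Protocol):
    """A hypothetical EA-defined contract for generated point solutions."""

    def health(self) -> Mapping[str, Any]:
        """Report availability/version so the app can be monitored centrally."""
        ...

    def export_records(self, entity: str, since: str) -> Iterable[Mapping[str, Any]]:
        """Expose business data in a standard shape for enterprise analytics."""
        ...

    def audit_events(self) -> Iterable[Mapping[str, Any]]:
        """Surface security-relevant events to a central logging pipeline."""
        ...

def admit_to_portfolio(app: PointSolutionContract) -> bool:
    """A registration-time check, standing in for real conformance testing:
    an app joins the portfolio only if it answers the contract."""
    return bool(app.health().get("ok"))
```

However the contract is expressed, the point is that the questions above get answered once, by design, rather than per application after the fact.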

In the case of innovation, the highest priorities for EA are therefore: looking for new disruptive technologies in the market, defining standards to enable their effective introduction and use, and then governing that delivery process to ensure standards are followed in practice.

 

Accelerate

Speed to market is a pressing reality in any environment I’ve seen, though it can lead to negative consequences, as I discussed in Fast and Cheap… Isn’t Good.

Certainly, one of the largest barriers to speed is complexity, and complexity can come in many forms depending on the makeup of the overall IT landscape; the standards, processes, and governance in place related to delivery; and the diversity of solutions, tools, and technologies involved in the ecosystem as a whole.

While I talk about standards, reuse, and governance in the broader article on IT strategy, I would argue that the largest priority for EA in terms of accelerating delivery is the rationalization of the solutions, tools, and technologies in use overall.

The more diverse the enterprise ecosystem is, the more difficult it becomes to add, replace, or integrate new solutions over time, and ultimately this will slow delivery efforts down to a snail’s pace (not to mention making them much more expensive and higher risk over time).

Using the example of a company that has performed many acquisitions over time, looking for opportunities to simplify and standardize core systems (e.g., moving to a single ERP rather than having multiple instances and running consolidations through a separate tool) can lead to a significant reduction in complexity, not to mention making it possible to redeploy resources to new capability development rather than spreading them across multiple redundant production solutions.

 

Optimize

In the case of increasing the value/cost ratio, the ability to rationalize tools and solutions should definitely lead to reduced cost of ownership (beyond the delivery benefit mentioned above), but the largest priority should be in identifying ways to modernize on a continual basis.

Again, in my experience, modernization is difficult to prioritize and fund until there is an end-of-life or end-of-support scenario, at which point it becomes a “must do” priority, and causes a significant amount of delivery disruption in the process.

What I believe is a much better and healthier approach to modernization is a more disciplined, thoughtful one, akin to “urban renewal”, where there is an annual allocation of work directed at modernization on a prioritized basis (the criteria for which should be established through EA, given an understanding of other business demand), such that significant “events” are mitigated and modernization becomes a sustained way of working.  In this model, the delineation between “keep the lights on” (KTLO) support, maintenance (which is where modernization efforts belong), and enhancement/build-related work is important.  In my experience, that second maintenance bucket is too often lumped into KTLO work and underserved/underfunded, which ultimately creates periodic crises in IT to remediate things that should have been addressed far sooner (and at a much lower cost) if a more disciplined portfolio management strategy were in place.
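
For illustration only, here is a small Python sketch of what a prioritized annual allocation could look like; the scoring criteria and weights are hypothetical stand-ins for whatever EA establishes with the business:

```python
from dataclasses import dataclass

@dataclass
class ModernizationCandidate:
    name: str
    months_to_end_of_support: int  # proximity to a forced "must do" event
    business_criticality: int      # 1 (low) .. 5 (high)
    est_cost: float                # rough remediation cost

def priority_score(c: ModernizationCandidate) -> float:
    """Rank candidates so the annual bucket goes to items that would
    otherwise become expensive emergencies; weights are illustrative."""
    urgency = max(0, 36 - c.months_to_end_of_support)  # nearer end-of-support, more urgent
    return urgency * 2 + c.business_criticality * 10

def plan_annual_allocation(candidates: list[ModernizationCandidate], budget: float) -> list[str]:
    """Fill the modernization bucket in priority order until the budget is spent."""
    selected, spent = [], 0.0
    for c in sorted(candidates, key=priority_score, reverse=True):
        if spent + c.est_cost <= budget:
            selected.append(c.name)
            spent += c.est_cost
    return selected
```

The mechanics matter far less than the discipline: a funded bucket, explicit criteria, and a cadence that keeps modernization from becoming a crisis.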

 

Inspire

In the interest of supporting the above objectives, having the right culture and skills to support ongoing evolution is imperative.  To that end, the role of EA should be in helping to inform and guide the core skills needed to “lean forward” into advanced technology, while maintaining the right level of competency to support the footprint in place.

Again, this is where having a focus on modernization can help, as it creates a means to sunset legacy tools and technologies and enables the continuous evolution of the skills the organization needs to operate (whether internally or externally sourced).

 

Perform

Finally, the role of EA in the production setting could be more or less difficult depending on how well the above capabilities are defined and supported in an enterprise.  To the degree standards, rationalization, modernization, and the right culture and skills are in place, the role of EA becomes helping to “tune” the environment to perform better and at a lower operating cost.

Where EA has a priority need is in ensuring there is an integrated approach to cyber security that aligns with development processes (e.g., DevSecOps), along with a comprehensive, integrated strategy to monitor and manage performance in the production environment, so that production incidents (to use ITIL-speak) can be minimized and mitigated to the maximum degree possible.
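
As one small illustration of aligning security with the delivery process, below is a hedged sketch of a pipeline release gate in Python; the policy thresholds and the scan report format are hypothetical, but the idea is that the standard is enforced automatically in the pipeline rather than reviewed after the fact:

```python
import json
import sys

# Hypothetical policy agreed between security/EA and delivery teams.
MAX_ALLOWED = {"critical": 0, "high": 0, "medium": 5}

def gate(scan_report_path: str) -> int:
    """Fail the pipeline (non-zero exit) if scan findings exceed policy.
    The report format is illustrative: [{"id": ..., "severity": ...}]."""
    with open(scan_report_path) as f:
        findings = json.load(f)

    counts: dict[str, int] = {}
    for item in findings:
        sev = item.get("severity", "unknown").lower()
        counts[sev] = counts.get(sev, 0) + 1

    # Severities without an explicit limit are not gated here.
    violations = {s: n for s, n in counts.items() if n > MAX_ALLOWED.get(s, float("inf"))}
    if violations:
        print(f"Deployment blocked, policy exceeded: {violations}")
        return 1
    print("Security gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```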

 

Wrapping Up

Looking back on the various dimensions and priorities outlined above in relation to the role of EA, perhaps there isn’t much that I can argue is very different from what the role entailed five or ten years ago… establish standards, simplify/rationalize, modernize, retool, govern… That being said, the pace at which these things need to be accomplished, and the criticality of doing them well, is greater than ever given the increasing role technology plays in the digital enterprise.  Like the other dimensions required to establish excellence in IT, this needs to start with courageous leadership, because it takes discipline to do things “right” while still doing them at a pace, and with an agility, that discerns the things that matter to an enterprise from those that are simply ivory tower thinking.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 03/27/2024

The Future of IT

Overview

Background

I’ve been thinking about writing this article for a while, with the premise of “what does IT look like in the future?”  In a digital economy, the role of technology in The Intelligent Enterprise will certainly continue to be creating value and competitive business advantage.  That being said, one can reasonably assume a few things that are true today for medium to large organizations will continue to be part of that reality as well, namely:

  • The technology footprint will be complex and heterogeneous in its makeup. To the degree that there is a history of acquisitions, even more so
  • Cost will always be a concern, especially to the degree it exceeds value delivered (this is explored in my article on Optimizing the Value of IT)
  • Agility will be important in adopting and integrating new capabilities rapidly, especially given the rate of technology advancement only appears to be accelerating over time
  • Talent management will be complex given the variety of technologies present will be highly diverse (something I’ve started to address in my Workforce and Sourcing Strategy Overview article)

My hope is to provide some perspective in this article on where I believe things will ultimately move in technology: the underlying makeup of the footprint itself, how we apply capabilities against it, and how to think about moving from our current reality to that environment.  Certainly, all five of the dimensions I outlined in my article on Creating Value Through Strategy will continue to apply at an overall strategy level (four of which are referenced in the bullet points above).

A Note on My Selfish Bias…

Before diving further into the topic at hand, I want to acknowledge that I am coming from a place where I love software development and the process surrounding it.  I taught myself to program in the third grade (in Apple Basic), got my degree in Computer Science, started as a software engineer, and taught myself Java and .Net for fun years after I stopped writing code as part of my “day job”.  I love the creative process for conceptualizing a problem, taking a blank sheet of paper (or white board), designing a solution, pulling up a keyboard, putting on some loud music, shutting out distractions, and ultimately having technology that solves that problem.  It is a very fun and rewarding thing to explore those boundaries of what’s possible and balance the creative aspects of conceptual design with the practical realities and physical constraints of technology development.

All that being said, insofar as this article is concerned, when we conceptualize the future of IT, I wanted to put a foundational position statement forward to frame where I’m going from here, which is:

Just because something is cool and I can do it, doesn’t mean I should.

That is a very difficult thing to internalize for those of us who live and breathe technology professionally.  Pride of authorship is a real thing and, if we’re to embrace the possibilities of a more capable future, we need to apply our energies in the right way to maximize the value we want to create in what we do.

The Producer/Consumer Model

Where the Challenge Exists Today

The fundamental problem I see in technology as a whole today (I realize I’m generalizing here) is that we tend to want to be good at everything, build too much, customize more than we should, and treat things like standards and governance as inconveniences that slow us down in the “deliver now” environment in which we generally operate (see my article Fast and Cheap, Isn’t Good for more on this point).

Where that leaves us is bloated, heavy, expensive, and slow… and it’s not good.  For all of our good intentions, IT doesn’t always have the best reputation for understanding, articulating, or delivering value in business terms and, in quite a lot of situations I’ve seen over the years, our delivery story can be marred with issues that don’t create a lot of confidence when the next big idea comes along and we want to capitalize on the opportunity it presents.

I’m being relatively negative on purpose here, but the point is to start with the humility of acknowledging the situation that exists in a lot of medium to large IT environments, because charting a path to the future requires a willingness to accept that reality and to create sustainable change in its place.  The good news, from my experience, is there is one thing going for most IT organizations I’ve seen that can be a critical element in pivoting to where we need to be: a strong sense of ownership.  That ownership may show up as frustration with the status quo depending on the organization itself, but I’ve rarely seen an IT environment where the practitioners themselves don’t feel ownership for the solutions they build, maintain, and operate or have a latent desire to make them better.  There may be a lack of a strategy or commitment to change in many organizations, but the underlying potential to improve is there, and that’s a very good thing if capitalized upon.

Challenging the Status Quo

Pivoting to the future state has to start with a few critical questions:

  • Where does IT create value for the organization?
  • Which of those capabilities are available through commercially available solutions?
  • To what degree are “differentiated” capabilities or features truly creating value? Are they exceptions or the norm?

Using an example from the past, a delivery team was charged with solving a set of business problems that they routinely addressed through custom solutions, even though the same capabilities could be accomplished through integration of one or more commercially available technologies.  From an internal standpoint, the team promoted the idea that they had a rapid delivery process, were highly responsive to the business needs they were meant to address, etc.  The problem is that the custom approach actually cost more money to develop, maintain, and support, and was considerably more difficult to scale.  Given the solutions were also continually developed without standards, their ability to adopt or integrate any new technologies available on the market was non-existent.  Those situations inevitably led to new custom solutions, and the costs of ownership skyrocketed over time.

This situation raises the question: if it’s possible to deliver equivalent business capability without building anything “in house”, why not do just that?

In the proverbial “buy versus build” argument, these are the reasons I believe it is valid to ultimately build a solution:

  • There is nothing commercially available that provides the capability at a reasonable cost
    • I’m referencing cost here, but it’s critical to understand the TCO implications of building and maintaining a solution over time. They are very often underestimated.
  • There is a commercially available solution that can provide the capability, but something about privacy, IP, confidentiality, security, or compliance-related concerns makes that solution infeasible in a way that contractual terms can’t address
    • I mention contracting purposefully here, because I’ve seen viable solutions eliminated from consideration over a lack of willingness to contract effectively, and that seems suboptimal by comparison with the cost of building alternative solutions instead

Ultimately, we create value through business capabilities enabled by technology; “who” built them doesn’t matter.

Rethinking the Model

My assertion is that we will obtain the most value and acceleration of business capabilities when we shift towards a producer/consumer model in technology as a whole.

What that suggests is that “corporate IT” largely adopts the mindset of the consumer of technologies (specifically services or components) developed by producers focused purely on building configurable, leverageable components that can be integrated in compelling ways into a connected ecosystem (or enterprise) of the future.

What corporate IT “produces” should be limited to differentiated capabilities that are not commercially available, plus a limited set of foundational capabilities that will be outlined below.  By producing less and thinking more like a consumer, the focus should shift internally towards how technology can more effectively enable business capability and innovation, and externally towards understanding, evaluating, and selecting from the best-of-breed capabilities in the market that help deliver on those business needs.

The implication, of course, for those focused on custom development would be to move towards those differentiated capabilities or entirely to the producer side (in a product-focused environment), which honestly could be more satisfying than corporate IT can be for those with a strong development inclination.

The cumulative effect of these adjustments should lead to an influx of talent into the product community, an associated expansion of available advanced capabilities in the market, and an accelerated ability to eventually adopt and integrate those components in the corporate environment (assuming the right infrastructure is then in place), creating more business value than is currently possible where everyone tries to do too much and sub-optimizes their collective potential.

Learning from the Evolution of Infrastructure

The Infrastructure Journey

You don’t need to look very far back in time to remember when the role of a CTO was largely focused on managing data centers and infrastructure in an internally hosted environment.  Along the way, third parties emerged to provide hosting services and alleviate the need to be concerned with routine maintenance, patching, and upgrades.  Then converged infrastructure and the software-defined data center provided opportunities to consolidate and optimize that footprint and manage cost more effectively.  With the rapid evolution of public and private cloud offerings, the arguments for managing much of your own infrastructure, beyond those related specifically to compliance or legal concerns, are very limited, and the trajectory of edge computing environments is still evolving fairly rapidly as specialized computing resources and appliances are developed.  The learning: it’s not what you manage in house that matters, it’s the services you provide relative to security, availability, scalability, and performance.

Ok, so what happens when we apply this conceptual model to data and applications?  What if we were to become a consumer of services in these domains as well?  The good news is that this journey is already underway; the question is how far we should take things in the interest of optimizing the value of IT within an organization.

The Path for Data and Analytics

In the case of data, I think about this area in two primary dimensions:

  • How we store, manage, and expose data
  • How we apply capabilities to that data and consume it

In terms of storage, the shift from hosted data to cloud-based solutions is already underway in many organizations.  The key levers continue to be ensuring data quality and governance, finding ways to minimize data movement and optimize data sharing (while facilitating near real-time analytics), and establishing means to expose data in standard ways (e.g., virtualization) that enable downstream analytic capabilities and consumption methods to scale and work consistently across an enterprise.  Certainly, the cost of ingress and egress of data across environments is a key consideration, especially where SaaS/PaaS solutions are concerned.  Another opportunity continues to be the money wasted on building data lakes (beyond archival and unstructured data needs) when viable platform solutions in that space are available.  From my perspective, the less time and resources spent on moving and storing data to no business benefit, the more energy that can be applied to exposing, analyzing, and consuming that data in ways that create actual value.  Simply said, we don’t create value in how or where we store data, we create value in how we consume it.
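
To illustrate the “expose data in standard ways” point, here is a minimal virtualization-style sketch in Python (the interface and sources are hypothetical): consumers query one standard interface, and the layer routes to whichever backing store holds the data, so analytic tools can be added or swapped without re-plumbing storage:

```python
from typing import Any, Iterable, Mapping, Protocol

class DataSource(Protocol):
    """Any backing store (warehouse, SaaS API, lake) adapts to this shape."""
    def query(self, entity: str, filters: Mapping[str, Any]) -> Iterable[Mapping[str, Any]]: ...

class VirtualDataLayer:
    """Routes standard queries to the system of record for each entity,
    so consumers never depend on where the data physically lives."""
    def __init__(self) -> None:
        self._routes: dict[str, DataSource] = {}

    def register(self, entity: str, source: DataSource) -> None:
        self._routes[entity] = source

    def query(self, entity: str, **filters: Any) -> Iterable[Mapping[str, Any]]:
        if entity not in self._routes:
            raise KeyError(f"No system of record registered for '{entity}'")
        return self._routes[entity].query(entity, filters)
```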

On the consumption side, having a standards-based environment with a consistent method for exposing data and enabling integration will lend itself well to tapping into the ever-expanding range of analytical tools on the market, as well as swapping out one technology for another as those tools continue to evolve and advance in their capabilities over time.  The other major pivot is to move from “traditional” analytical reporting and business intelligence solutions to more dynamic data apps that leverage AI to inform meaningful end-user actions, whether for internal or external users of systems.  Compliance-related needs aside, at an overall level, the primary goal of analytics should be informed action, not administrivia.

The Shift in Applications

The challenge in the applications environment is arbitrating the balance between monolithic (“all in”) solutions, like ERPs, and a fully distributed component-based environment that requires potentially significant management and coordination from an IT standpoint. 

Conceptually, for smaller organizations, where the core applications (like an ERP suite + CRM solution) represent the majority of the overall footprint and there aren’t a significant number of specialized applications that must interoperate with them, it likely would be appropriate and effective to standardize based on those solutions, their data model, and integration technologies.

On the other hand, the more diverse and complex the underlying footprint of a medium- to large-size organization, the more value there is in looking at ways to decompose these relatively monolithic environments to provide interoperability across solutions, enable rapid integration of new capabilities into a best-of-breed ecosystem, and facilitate analytics that span multiple platforms in ways that would be difficult, costly, or impossible to do within any one or two given solutions.  What that translates to, in my mind, is an eventual decline of the monolithic ERP-centric environment in favor of a service-driven ecosystem where individually configured capabilities are orchestrated through data and integration standards, with components provided by various producers in the market.  That doesn’t necessarily align to the product strategies of individual companies trying to grow through complementary vertical or horizontal solutions, but I would argue those products should create value at an individual component level and be configurable such that swapping out one component of a larger ecosystem remains feasible without having to abandon the other products in that application suite (which may individually be best-of-breed).

Whether shifting from a highly insourced to a highly outsourced/consumption-based model for data and applications will be feasible remains to be seen, but there was certainly a time not that long ago when hosting a substantial portion of an organization’s infrastructure footprint in the public cloud was a cultural challenge.  Moving up the technology stack from the infrastructure layer to data and applications seems like a logical extension of that mindset, placing emphasis on capabilities provided and value delivered versus assets created over time.

Defining Critical Capabilities

Own Only What is Essential

Making an argument to shift to a consumption-oriented mindset in technology doesn’t mean there isn’t value in “owning” anything; rather, it’s meant to be a call to evaluate and challenge assumptions related to where IT creates differentiated value and to apply our energies towards those things.  What can be leveraged, configured, and orchestrated, I would buy and use.  What should be built?  Capabilities that are truly unique, create competitive advantage, can’t be sourced in the market overall, and that create a unified experience for end users.  On the final point, I believe that shifting to a disaggregated applications environment could create complexity for end users in navigating end-to-end processes in intuitive ways, especially to the degree that data apps and integrated intelligence become a common way of working.  To that end, building end user experiences that can leverage underlying capabilities provided by third parties feels like a thoughtful balance between a largely outsourced application environment and a highly effective and productive individual consumer of technology.

Recognize Orchestration is King

Workflow and business process management is not a new concept in the integration space, but it’s been elusive (in my experience) for many years for a number of reasons.  What is clear at this point is that, with the rapid expansion in technology capabilities continuing to hit the market, our ability to synthesize a connected ecosystem that blends these unique technologies with existing core systems is critical.  The more we can do this in consistent ways, and the more we shift towards a configurable, dynamic, framework-driven environment, the more business flexibility and agility we will provide… and that translates to innovation and competitive advantage over time.  Orchestration is also a critical piece of deciding which processes matter enough that they shouldn’t be relegated to the internal workings of a platform solution or ERP, but instead taken in-house, mapped out, and coordinated with the intention of creating differentiated value that can be measured, evaluated, and optimized over time.  Clearly the scalability and performance of this component is critical, especially to the degree there is a significant amount of activity being managed through this infrastructure, but I believe the transparency, agility, and control afforded in this kind of environment would greatly outweigh the complexity involved in its implementation.
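
As a sketch of what “configurable and framework-driven” might look like (the step names and process are hypothetical), the workflow below is pure data: changing the process means editing the step list rather than rewriting code, and the engine records per-step timing, the kind of process performance data discussed later in this article:

```python
import time
from typing import Callable

# Hypothetical step implementations; in practice these would call underlying
# services provided by internal teams or external producers.
def validate_order(ctx: dict) -> None: ctx["valid"] = True
def reserve_inventory(ctx: dict) -> None: ctx["reserved"] = True
def notify_customer(ctx: dict) -> None: ctx["notified"] = True

# The process definition is configuration, not code.
ORDER_WORKFLOW: list[tuple[str, Callable[[dict], None]]] = [
    ("validate", validate_order),
    ("reserve", reserve_inventory),
    ("notify", notify_customer),
]

def run_workflow(steps: list[tuple[str, Callable[[dict], None]]], ctx: dict) -> dict[str, float]:
    """Execute each configured step and capture its duration so the
    process itself can be measured, evaluated, and optimized."""
    timings: dict[str, float] = {}
    for name, step in steps:
        start = time.perf_counter()
        step(ctx)
        timings[name] = time.perf_counter() - start
    return timings

metrics = run_workflow(ORDER_WORKFLOW, {"order_id": 123})
```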

Put Integration in the Center

In a service-driven environment, the infrastructure for integration, streaming in particular, along with enabling a publish-and-subscribe model for event-driven processing, will clearly be critical for high-priority enterprise transactions.  The challenge in integration conversations, in my experience, tends to be defining the transactions that “matter”, in terms of facilitating interoperability and reuse, versus those that are suitable for point-to-point, one-off connections.  There is ultimately a cost for reuse when you try to scale, and discipline is needed to arbitrate those decisions so they are appropriate to business needs.
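
For readers less familiar with the publish-and-subscribe pattern, here is a toy in-memory sketch in Python (a real implementation would sit on streaming infrastructure): producers publish events without knowing who consumes them, which is what makes adding a new subscriber a configuration exercise rather than another point-to-point integration:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Toy publish/subscribe broker; a stand-in for streaming infrastructure."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
# Two independent consumers of the same business event; the producer
# (e.g., an order service) doesn't know or care that either exists.
bus.subscribe("order.created", lambda e: print("fulfillment picked up", e["id"]))
bus.subscribe("order.created", lambda e: print("analytics recorded", e["id"]))
bus.publish("order.created", {"id": 42, "amount": 99.50})
```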

Reassess Your Applications/Services

With any medium to large organization, there is likely technology sprawl to be addressed, particularly if there is a material level of custom development (because component boundaries likely won’t be well architected) and acquired technology (because of the duplication it can cause in solutions and instances of solutions) in the landscape.  Another complicating factor can be the diversity of technologies and architectures in place, depending on whether or not a disciplined modernization effort exists, the level of architecture governance in place, and the rate and means by which new technologies are introduced into the environment.  All of these factors call for a thoughtful portfolio strategy to identify critical business capabilities and ensure the technology solutions meant to enable them are modern, configurable, rationalized, and integrated effectively from an enterprise perspective.

Leverage Data and Insights, Then Optimize

With analytics and insights being critical capabilities for differentiated business performance, an effective data governance program with business stewardship, the right core, standard data sets to enable purposeful, actionable analytics, and the process performance data associated with orchestrated workflows are all critical components of any future IT infrastructure.  This is not all data; it’s the subset that creates enough business value to justify the investment in making it actionable.  As process performance data is gathered through the orchestration approach, analytics can be performed to look for opportunities to evolve processes, configurations, rules, and other characteristics of the environment based on key business metrics to improve performance over time.

Monitor and Manage

With the expansion of technologies and components, internal and external to the enterprise environment, having the ability to monitor and detect issues, proactively take action, and mitigate performance, security, or availability issues will become increasingly important.  Today’s tools are too fragmented and siloed to achieve the level of holistic understanding that is needed between hosted and cloud-based environments, including internal and external security threats in the process.
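
As a gesture at what a holistic view could mean in practice, the sketch below (checks and component names are hypothetical) consolidates health signals from hosted and cloud-based components into one view, instead of leaving each tool’s status siloed in its own console:

```python
from typing import Callable

# Hypothetical health checks; real ones would query monitoring APIs, log
# aggregators, and security tooling across hosted and cloud environments.
CHECKS: dict[str, Callable[[], bool]] = {
    "erp (hosted)": lambda: True,
    "crm (saas)": lambda: True,
    "event bus (cloud)": lambda: False,
}

def consolidated_status() -> dict[str, str]:
    """One integrated view across environments, so a degraded component
    is seen in context rather than inside a single tool's silo."""
    return {name: ("ok" if check() else "DEGRADED") for name, check in CHECKS.items()}

for component, status in consolidated_status().items():
    print(f"{component}: {status}")
```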

Secure “Everything”

With vulnerability and cyber risk expanding at a rate that exceeds any organization’s ability to fully mitigate it, treating security, zero trust and vulnerability management included, as a fundamental requirement of current and future IT environments is a given.  The development of a purposeful cyber strategy, prioritizing areas for tooling and governance effectively, and continuing to evolve and adapt that infrastructure will be core to the DNA of operating successfully in any organization.  Security is not a nice to have; it’s a requirement.

The Role of Standards and Governance

What makes the framework-driven environment of the future work is ultimately having meaningful standards and governance, particularly for data and integration, but extending into application and data architecture, along with how those environments are constructed and layered to facilitate evolution and change over time.  Excellence takes discipline and, while that may require some additional investment in cost and time during the initial and ongoing stages of delivery, it will easily pay itself off in business agility, operating cost/cost of ownership, and risk/exposure to cyber incidents over time.

The Lending Example

Having spent time a number of years ago understanding and developing strategy in the consumer lending domain, the similarities in process between direct and indirect lending, prime and specialty/sub-prime, and simple products like credit cards versus more complex ones like mortgages are difficult to ignore.  That being said, it isn’t unusual for systems to exist in a fairly siloed manner, from application to booking, through document preparation, and into the servicing process itself.

What’s interesting, from my perspective, is where the differentiation actually exists across these product sets: in the rules and workflow being applied across them, while the underlying functions themselves are relatively the same.  As an example, one thing that differentiates a lender is their risk management policy, not necessarily the tool they use to implement their underwriting rules or scoring models per se.  Similarly, whether pulling a credit score is part of the front end of the process (as in credit card) or an intermediate step (as in education lending), having a configurable workflow engine could enable origination across a diverse product set with essentially the same back-end capabilities, and likely at a lower operating cost.

So why does it matter?  Well, to the degree that the focus shifts from developing core components that implement relatively commoditized capability to the rules and processes that enable various products to be delivered to end consumers, the speed with which products can be developed, enhanced, modified, and deployed should be significantly improved.
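
A minimal sketch of that idea in Python (product names, steps, and policy numbers are all hypothetical): the back-end capabilities are shared, and each lending product is just a different ordering of steps plus its own risk policy, so launching or modifying a product becomes a configuration exercise rather than new system development:

```python
# Shared back-end capabilities, common across lending products.
def capture_application(app: dict) -> None: app["captured"] = True
def pull_credit_score(app: dict) -> None: app["score"] = 700  # stub for a bureau call
def underwrite(app: dict) -> None: app["approved"] = app["score"] >= app["min_score"]
def book_loan(app: dict) -> None: app["booked"] = app.get("approved", False)

# Product differentiation lives in configuration: step order and risk policy.
PRODUCTS: dict[str, dict] = {
    # Credit pull at the front end of the process...
    "credit_card": {"min_score": 680,
                    "steps": [pull_credit_score, capture_application, underwrite, book_loan]},
    # ...versus as an intermediate step, with a different risk policy.
    "education_loan": {"min_score": 640,
                       "steps": [capture_application, pull_credit_score, underwrite, book_loan]},
}

def originate(product: str, app: dict) -> dict:
    """Run one product's configured workflow over the shared capabilities."""
    config = PRODUCTS[product]
    app["min_score"] = config["min_score"]  # risk policy applied as data, not code
    for step in config["steps"]:
        step(app)
    return app

print(originate("education_loan", {"applicant": "A. Borrower"}))
```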

Ok, Sounds Great, But Now What?

It Starts with Culture

At the end of the day, even the best designed solutions come down to culture.  As I mentioned above, excellence takes discipline and, at times, a patience and thoughtfulness that seems to contradict the speed with which we want to operate from a technology (and business) standpoint.  That being said, given the challenges that ultimately arise when you operate without the right standards, discipline, and governance, the outcome is well worth the associated investments.  This is why I placed courageous leadership as the first pillar in the five dimensions outlined in my article on Excellence by Design.  Leadership is critical and, without it, everything else becomes much more difficult to accomplish.

Exploring the Right Operating Model

Once a strategy is established to define the desired future state and a culture to promote change and evolution is in place, looking at how to organize around managing that change is worth consideration.  I don’t necessarily believe in “all in” operating approaches, whether it is plan/build/run, a product-based orientation, or some other relatively established model.  I do believe that, given leadership and adaptability are critically needed for transformational change, looking at how the organization is aligned to maintaining and operating the legacy environment versus enabling the establishment of, and transition to, the future environment is something to explore.  As an example, rather than assuming a pure product-based orientation, which could mushroom into a bloated organization design where not all leaders are well suited to manage change effectively, I’d consider organizing around a defined set of “transformation teams” that operate in a product-oriented/iterative model.  These teams would take on pieces of the technology environment; re-orient, optimize, modernize, and align them to the future operating model; then transition those working assets to different leaders who maintain or manage those solutions, freeing the transformation teams to move to the next set of targets.  This should be done in concert with looking for ways to establish “common components” teams (where infrastructure like cloud platform enablement can be a component as well) that are driven to produce core, reusable services or assets that can be consumed in the interest of ultimately accelerating delivery and enabling wider adoption of the future operating model for IT.

Managing Transition

One of the consistent challenges with any kind of transformative change is moving from what is likely a very diverse, heterogeneous environment to one that is standards-based, governed, and relatively optimized.  While it’s tempting to take on too much scope and ultimately undermine the aspirations of change, I believe there is a balance to be struck in defining and establishing some core delivery capabilities that are part of the future infrastructure while incrementally migrating individual capabilities into that future environment over time.  This is another case where disciplined operations and disciplined delivery come into play, so that changes are delivered predictably but also in a way that is sustainable and aligned with the desired future state.

Wrapping Up

While a certain level of evolution is guaranteed as part of working in technology, the primary question is whether we will define and shape that future or be continually reacting and responding to it.  My belief is that we can, through a level of thoughtful planning and strategy, influence and shape the future environment to be one that enables rapid evolution as well as accelerated integration of best-of-breed capabilities at a pace and scale that is difficult to deliver today.  Truly moving to a full producer/consumer type environment that is service-based, standardized, governed, orchestrated, fully secured, and optimized is unlikely, but falling short of excellence as an aspiration would still leave us in a considerably better place than where we are today… and it’s a journey worth making, in my opinion.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 03/08/2024

On “Delivering at Speed”

Context

In my article on Excellence by Design, the fifth dimension I reference is “Delivering at Speed”: basically, the delicate balance to be struck in technology between creating value on a predictable, regular basis and still ensuring quality.

One thing that is true: software development is messy and not for the faint of heart or the risk averse.  The dynamics of a project, especially if you are doing something innovative, tend to be in flux at a level that requires you to adapt on the fly, make difficult decisions, accept tradeoffs, and abandon the idea that “perfection” even exists.  You can certainly try to design and build out an ivory tower concept, but the probability is that you’ll never deliver it or, in the event you do, that it will take you so long to make it happen that your solution will be obsolete by the time you finally go live.

To that end, this article is meant to share a set of delivery stories from my past.  In three cases, I was told the projects were “impossible” or couldn’t be delivered at the outset.  In the other two, the level of complexity or size of the challenge was relatively similar, though they weren’t necessarily labeled “impossible” at any point.  In all cases, we delivered the work.  What will follow, in each case, are the challenges we faced, what we did to address them, and what I would do differently if the situation happened again.  Even in success, there is ample opportunity to learn and improve.

Looking across these experiences, there is a set of things I would say apply in nearly all cases:

  1. Commitment to Success
    • It has to start here, especially with high velocity or complex projects. When you decide on day one that you’re going to deliver, every obstacle is a problem you solve, not a reason to quit.  Said differently, if you are continually looking for reasons to fail, you will.
  2. Adaptive Leadership
    • Change is part of delivering technology solutions. Courageous leadership that embraces humility, accepts adversity, and adapts to changing conditions will succeed far more often than leadership that holds onto original assumptions beyond their usefulness
  3. Business/Technology Collaboration
    • Effective communication, a joint investment in success, and partnership make a significant difference in software delivery. The relationships and trust this takes can be achievements in themselves, but the quality of the solution and the ability to deliver are definitely stronger where they are in place
  4. Timely Decision Making
    • As I discuss in my article On Project Health and Transparency, there are “points of inflection” that occur on projects of any scale. Your ability to respond, pivot, and execute in a new direction can be a critical determinant of whether you deliver
  5. Allowing for the Unknown
    • With any project of scale or reasonable complexity, there will be pure “unknown” (by comparison with “known unknown”) that is part of the product scope, the project scope, or both. While there is always a desire to deliver solutions as quickly as possible with the lowest level of effort (discussed in my article Fast and Cheap Isn’t Good), including some effort or schedule time proactively as contingency for the unknown is always a good idea
  6. Avoiding Excessive Complexity
    • One thing that is common across most of these situations is that the approach to the solution involved building a level of flexibility or capability beyond what was needed in practice. This is not unusual on new development, especially where significant funding is involved, because those bells and whistles create part of the appeal that justifies the investment to begin with.  That being said, if a crawl-walk-run type approach that evolves to a more feature-rich solution is possible, the risk profile for the initial efforts (and associated cost) will likely be reduced substantially.  Said differently, you can’t generate return on investment for a solution you never deliver

The remainder of this article is focused on sharing a set of these delivery stories.  I’ve purposefully reordered and made them a bit abstract in the interest of maintaining a level of confidentiality on the original efforts involved.  In practice, these kinds of things happen on projects all the time, the techniques referenced are applicable in many situations, and the specifics aren’t as important.

 

Delivering the “Impossible”

Setting the Stage

I’ll always remember how this project started. I had finished up my previous assignment and was wondering what was next.  My manager called me in and told me about a delivery project I was going to lead, that it was for a global “shrink wrap” type solution, that a prototype had been developed, that I needed to design and build the solution… but not to be too concerned because the timeframe was too short, the customer was historically very difficult to work with, and there was “no way to deliver the project on time”.  Definitely not a great moment for motivating and inspiring an employee, but my manager was probably trying to manage my expectations given the size of the challenge ahead.

Some challenges associated with this effort:

  1. The nature of what was being done, in terms of automating an entirely manual process, had never been attempted before. As such, the requirements didn’t exist at the outset, beyond a rudimentary conceptual prototype that demonstrated the desired user interface behavior
  2. I was completely unfamiliar with the technology used for the prototype and needed to immediately assess whether to continue forward with it or migrate into a technology I knew
  3. The timeframe was very aggressive and the customer was notorious for not delivering on time
  4. We needed everything we developed to fit on a single 3.5-inch diskette for distribution, and it was not a small application to develop

What Worked

Regardless of any of the mechanics, having been through a failed delivery early in my career, my immediate reaction to hearing the project was doomed from the outset was that there was no way we were going to allow that to happen.

Things that mattered in the delivery:

  1. Within the first two weeks, I both learned the technology that was used for the prototype and was able to rewrite it (it was a “smoke and mirrors” prototype) into a working, functional application. Knowing that the underlying technology could do what was needed in terms of the end user experience, we accepted the learning curve impact in the interest of reducing the development effort that would have been required to create a similar experience using the other technologies we had at the time
  2. Though we struggled initially, eventually we brought a leader from the customer team to our office to work alongside us (think Agile before Agile) so we could align requirements with our delivery iterations and produce application sections for testing in relatively complete form
  3. We had to make everything we developed as compact as possible given the single-disk requirement so that distribution costs wouldn’t escalate (tens of thousands of disks were being shipped globally)

All of the above having helped, the thing that made the largest difference was the commitment of the team (ultimately me and three others) to do whatever was required to deliver.  The brute force involved was substantial, and we worked increasing hours, week after week, until we pulled seven all-night sessions in the last ten days leading up to shipping the software to the production company.  It was an exceptionally difficult pace to sustain, but we hit the date, and the “impossible” was made possible.

What I Would Change

While there is a great deal of satisfaction that comes from meeting a delivery objective, especially an aggressive one, there are a number of things I would have wanted to do differently in retrospect:

  1. We grew the team over time in a way that created additional pressure in the latter half of the project. Given we started with no requirements and were doing something that had never been done, I’m not sure how we could have estimated the delivery to know we needed more help sooner but, minimally, from a risk standpoint, there was too much work spread too thinly for too long, and it made catching up very challenging later on
  2. As I mentioned above, we eventually transitioned to an integrated business/technology team that delivered the application with tight collaboration. This should have happened sooner, but we collectively waited until it became a critical issue and escalated to a level that forced everyone to address it.  That came when we actually ran out of requirements late one night (it was around 2am), to the point that we needed to stop development altogether.  The friction this created between the customer and development team was difficult to work through and is something the change in approach made much better, just too late in the project
  3. From a software standpoint, given it was everyone on the team’s first foray into the new technology (starting with me), there was a lot we could have done to design the solution better, but we were unfortunately learning on the job. This is another one that I don’t know how we could have offset, beyond bringing in an expert developer to help us work through the design and sanity check our work, but the technology was so new at the time that I don’t know such expertise was really available
  4. This was also my first global software solution and I didn’t appreciate the complexities of localization enough to avoid making some very basic mistakes that showed up late in the delivery process.

 

Working Outside the Service Lines

Setting the Stage

This project honestly was sort of an “accidental delivery”, in that there was no intention from a services standpoint to take on the work to begin with.  Similar to my product development experience, there was a customer need to both standardize and automate what was an entirely manual process.  Our role, and mine in particular, was to work with the customer, understand the current process across the different people performing it, and then look for ways to standardize the workflow itself (in a flexible enough way that everyone could be trained to follow the new process) and to define opportunities to automate it, such that a lot of the effort done in spreadsheets (prone to various errors and risks) could be built into an application that would make it much easier to perform the work.

The point of inflection came when we completed the process redesign and, with no implementation partner in place (and no familiarity with the target technologies in the customer team), the question became “who is going to design and build this solution?”  Having a limited window of time, a significant amount of seasonal business that needed to be processed, and the right level of delivery experience, I offered to shift from a business analyst to the technology lead on the project.  With a substantial challenge ahead, I was again told what we were trying to do could never be done in the time we had.  Having had previous success in that situation, I took it as a challenge to figure out what we needed to do to deliver.

Some challenges associated with this effort:

  1. The timeframe was the biggest challenge, given we had to design and develop the entire application from scratch. The business process was defined, but there was no user interface design, the solution was being built using relatively new technology, and we needed to provide the flexibility users were used to having in Excel while still enforcing a new process
  2. Given the risk profile, the customer IT manager assumed the effort would fail and consequently provided only a limited amount of support and guidance until the very end of the project, which created some integration challenges with the existing IT infrastructure
  3. Finally, given that there were technology changes occurring in the market as a whole, we encountered a limitation in the tools (given the volume of data we were processing) that nearly caused us to hit a full stop mid-development

What Worked

Certainly, an advantage I had coming into the design and delivery effort was that I helped develop the new process and was familiar with all the assumptions we made during that phase of the work.  In that respect, the traditional disconnect between “requirements” and “solution” was fairly well mitigated and we could focus on how to design the interface, not the workflow or data required across the process.

Things that mattered in the delivery:

  1. One major thing that we did well from the outset was work in a prototype-driven approach, engaging with the end customer, sketching out pieces of the process, mocking them up, confirming the behavior, then moving onto the next set of steps while building the back end of the application offline. Given we only had a matter of months, the partnership with the key business customer and their investment in success made a significant difference in the efficiency of our delivery process (again, very Agile before Agile)
  2. Despite the lack of support from customer IT leadership standpoint, a key member of their team invested in the work, put in a tremendous amount of effort, and helped keep morale positive despite the extreme hours we worked for essentially the entire duration of the project
  3. While not as pleasant, another thing that contributed to our success was managing performance actively. Wanting external expertise (and needing the delivery capacity), we pulled in additional contracting help, but had inconsistent experience with the commitment level of the people we brought in.  Simply said: you can’t deliver a high-velocity project with a half-hearted commitment.  It doesn’t work.  The good news is that we didn’t delay decisions to pull people whose contributions weren’t where they needed to be
  4. On the technology challenges, when serious issues arose with our chosen platform, I took a fairly methodical approach to isolating and resolving the infrastructure issues we had. The result was a very surgical and tactical change to how we deployed the application without needing to do a more complex (and costly) end user upgrade that initially appeared to be our only option

What I Would Change

While the long hours and months without a day off ultimately enabled us to deliver the project, there were certainly learnings from this effort that I took away despite our overall success.

Things I would have wanted to do differently in retrospect:

  1. While the customer partnership was very effective overall, one area where we didn’t engage early enough was with the customer analytics organization. Given the large volume of data, heavy reliance on computational models, and the capability for users to select data sets to include in the calculations being performed, we needed more support than expected to verify our forecasting capabilities were working as expected.  This was actually a gap in the upstream process design work itself, as we identified the desired capability (the “feature”) and where it would occur within the workflow, but didn’t flesh out the specific calculations (the “functionality”) that needed to be built to support it.  As a result, we had to work through those requirements during the development process itself, which was very challenging
  2. From a technology standpoint, we assumed a distributed approach for managing data associated with the application. While this reduced the data footprint for individual end users and simplified some of the development effort, it actually made the maintenance and overall analytics associated with the platform more complex.  Ultimately, we should have centralized the back end of the application.  This is something that was done subsequent to the initial deployment, though I’m not certain if we would have been able to take that approach with the initial release and still made the delivery date
  3. From a services standpoint, while I had the capability to lead the design and delivery of the application, the work itself was outside the core service offerings of our firm. Consequently, while we delivered for the customer, there wasn’t an ability to leverage the outcome for future work, which is important for building your business in consulting.  In retrospect, while I wouldn’t have learned as much or gotten the experience, we should have engaged a partner in the delivery and played a different role in implementation

 

Project Extension

Setting the Stage

Early in my experience of managing projects, I had the opportunity to take on an effort where the entire delivery team was coming off a very difficult, long project.  I was motivated and wanted to deliver, everyone else was pretty tired.  The guidance I received at the outset was not to expect very much, and that the road ahead was going to be very bumpy.

Some challenges associated with this effort:

  1. As I mentioned above, the largest challenge was a lack of motivation, where motivation had been a strength in other high-pressure deliveries I’d encountered before. I was unused to dealing with this from a leadership standpoint and didn’t address it as effectively as I should have
  2. From a delivery standpoint, the technical solution was fairly complex, which made the work and testing process challenging, especially in the timeframe we had for the effort
  3. At a practical level, the team was larger than I had previous experience leading. Leading other leaders wasn’t something I had done before, which led me to make all the normal mistakes that come with doing so for the first time, and that didn’t help with either efficiency or sorely needed motivation

What Worked

While the project started with a team that was already burned out, the good news was that the base application was in place, the team understood the architecture, and the scope was to build out capabilities on top of a reasonably strong foundation.  There was a substantial amount of work to be performed in a relatively short timeframe, but we weren’t starting from scratch and there was recent evidence that the team could deliver.

Things that mattered in the delivery:

  1. The client partnership was strong, which helped both in addressing requirements gaps and, more importantly, in performing customer testing in both an efficient and effective manner given the accelerated timeframe
  2. At the outset of the effort, we revisited the detailed estimates and realigned the delivery team to balance the work more effectively across sub-teams. While this required some cross-training, we reduced overall risk in the process
  3. From a planning standpoint, we enlisted the team to try out an aggressive approach where we set all the milestones slightly ahead of their expected delivery date. Our assumption was that, by trying to beat our targets, we could create some forward momentum that would create “effort reserve” to use for unexpected issues and defects later in the project
  4. Given the pace of the work and size of the delivery team, we had the benefit of strong technical leads who helped keep the team focused and troubleshoot issues as and when we encountered them

What I Would Change

Like other projects I’m covering in this article, the team put in the effort to deliver on our commitments, but there were definitely learnings that came through the process.

Things I would have wanted to do differently in retrospect:

  1. Given it was my first time leading a larger team under a tight timeline, I pushed where I should have inspired. It was a learning experience that I’ve used for the benefit of others many times since.  While I don’t know what impact it might have had on the delivery itself, it might have made the experience of the journey better overall
  2. From a staffing standpoint, we consciously stuck to the team that helped deliver the initial project. Given the burnout was substantial and we needed to do a level of cross-training anyway, it might have been a good idea for us to infuse some outside talent to provide fresh perspective and much needed energy from a development standpoint
  3. Finally, while it was outside the scope of work itself, this project was an example of a situation I’ve encountered a few times over the years where the requirements of the solution and its desired capabilities were overstated and translated into a lot of complexity in architecture and design. My guess is that we built a lot of flexibility that wasn’t required in practice

 

Modernization Program

Setting the Stage

What I think of as another “impossible” delivery came with a large-scale program that started off with everyone but the sponsors assuming it would fail.

Some challenges associated with this effort:

  1. The largest thing stacked against us was two failed attempts to deliver the project in the past, with substantial costs associated with each. Our business partners were well aware of those failures, some having participated in them, and the engagement was tentative at best when we started to move into execution
  2. We also had significant delivery issues with our primary technology partner that resulted in them being transitioned out mid-implementation. Unfortunately, they didn’t handle the situation gracefully, escalated everywhere, told the CIO the project would never be successful, and the pressure on the team to hit the first release on schedule was increased by extension
  3. From an architecture standpoint, the decision was made to integrate new technology with existing legacy software wherever possible, which added substantial development complexity
  4. The scale and complexity for a custom development effort was very significant, replacing multiple systems with one new, integrated platform, and the resulting planning and coordination was challenging
  5. Given the solution replaced existing production systems, there was a major challenge in keeping capabilities in sync between the new application and ongoing enhancements being implemented in parallel by the legacy application delivery team

What Worked

Things that mattered in the delivery:

  1. As much as any specific decision or “change”, what contributed to the ultimate success of the program was our continuous evolution of the approach as we encountered challenges. With a program of the scale and complexity we were addressing, there was no reasonable way to mitigate the knowledge and requirements risks that existed at the outset.  What we did exceptionally well was to pivot and work through obstacles as they appeared… in architecture, requirements, configuration management, and other aspects of the work.  That adaptive leadership was critical in meeting our commitments and delivering the platform
  2. The decision to change delivery partners was a significant disruption mid-delivery that we managed with a weekly transition management process to surface and address risks and issues on an ongoing basis. The governance we applied was very tight across all the touchpoints into the program and it helped us ultimately onboard the new partner and reduce risk on the first delivery date, which we ultimately met
  3. To accelerate overall development across the program, we created both framework and common components teams, leveraging reuse to help reduce risk and effort required in each of the individual product teams. While there was some upfront coordination to decide how to arbitrate scope of work, we reduced the overall effort in the program substantially and could, in retrospect, have built even more “in common” than we did
  4. Finally, to keep the new development in sync with the current production solutions, we integrated the program with ongoing portfolio management processes from work-intake and estimation through delivery as if we were already in production. This helped us avoid rework that would have come if we had to retrofit those efforts post-development in the pre-production stage of the work

The net result of a lot of adjustments and a very strong, committed set of delivery teams was that we met our original committed launch date and moved into the broader deployment of the program.

What I Would Change

The learnings from a program of this scale could constitute an article all on their own, so I’ll focus on a subset that were substantial at an overall level.

Things I would have wanted to do differently in retrospect:

  1. As I mentioned in the point on common components above, the mix between platform and products wasn’t right. Our development leadership was drawn from the legacy systems, which helped, given they were familiar with the scope and requirements, but the downside was that the new platform ended up being siloed in a way that mimicked the legacy environment.  While we started to promote a culture of reuse, we could have done a lot more to reduce scope in the product solutions and leverage the underlying platform more
  2. Our product development approach should have been more framework-centric, being built towards broader requirements versus individual nuances and exceptions. There was a considerable amount of flexibility architected into the platform itself, but given the approach was focused on implementing every requirement as if it was an exception, the complexity and maintenance cost of the resulting platform was higher than it should have been
  3. From a transition standpoint, we should have replaced our initial provider earlier, but given the depth and nature of their relationships and a generally risk-averse mindset, we gave them a matter of months to fail, multiple times, before making the ultimate decision to change. Given there was a substantial difference in execution once we completed transition, we waited longer than we should have
  4. Given we were replacing multiple existing legacy solutions, there was a level of internal competition that was unhealthy and should have been managed more effectively from a leadership standpoint. The impact was that there were times the legacy teams were accelerating capabilities on systems we knew were going to be retired in what appeared to be an effort to undermine the new platform

 

Project Takeover

Setting the Stage

We had the opportunity in consulting to bid on a development project from our largest competitor that was stopped mid-implementation.  As part of the discovery process, we received sample code, testing status, and the defect log at the time the project was stopped.  We did our best to make conservative assumptions on what we were inheriting and the accuracy of what we received, understanding we were in a bidding situation and had to lean into discomfort and price the work accordingly.  In practice, the situation we took over was far worse than expected.

Some challenges associated with this effort:

  1. While the quality of solution was unknown at the outset, we were aware of a fairly high number of critical defects. Given the project didn’t complete testing and some defects were likely to be blocking the discovery of others, we decided to go with a conservative assumption that the resulting severe defect count could be 2x the set reported to us.  In practice the quality was far worse and there were 6x more critical defects than were reported to us at the bidding stage
  2. In concert with the previous point, the testing results provided a mixed sense of progress, with some areas in the “yellow” (suggesting a degree of stability) and others in the “red” (needing attention). In practice, the testing regimen itself was clearly not thorough: there wasn’t a single piece of the application that was better than a “red” status, and most were more accurately “purple” (restart from scratch), if such a condition even existed
  3. Given the prior project was stopped, there was a very high level of visibility, and the expectation to pick up and use what was previously built was unrealistic to a large degree, given the quality of work was so poor
  4. Finally, there was considerable resource contention with the client testing team not being dedicated to the project and, consequently, it became very difficult to verify the solution as we progressed through stabilizing the application and completing development

What Worked

While the scale and difficulty of the effort was largely underrepresented at the outset of the work, as we dug in and started to understand the situation, we made adjustments that ultimately helped us stabilize and deliver the program.

Things that mattered in the delivery:

  1. Our challenges in testing aside, we had the benefit of a strong client partnership, particularly in program management and coordination, which helped given the high level of volatility we had in replanning as we progressed through the effort
  2. Given we were in a discovery process for the first half of the project, our tracking and reporting methods helped manage expectations and enable coordination as we continued to revise the approach and plan. One specific method we used was showing the fix rate in relation to the level of undiscovered defects and then mapping that additional effort directly to the adjusted plan (a simplified sketch follows this list).  When we visibly accounted for it in the schedule, it helped build confidence that we actually were “on plan” where we had good data, and making consistent progress where we had taken on additional, unforeseen scope.  Those items were reasonably outside our control as a partner, so the transparency helped us, given the visibility and pressure surrounding the work were very high
  3. Finally, we mitigated risk in various situations by making the decision to rewrite versus fix what was handed over at the outset of the project. The code quality being as poor as it was and requirements not being met, we had to evaluate whether it was easier to start over and work with a clean slate versus trying to reverse engineer something we knew didn’t work.  These decisions helped us reduce effort and risk, and ultimately deliver the program
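To illustrate the tracking method, here is a simplified sketch in Python, with hypothetical counts; real numbers would come from the defect log and weekly testing results:

```python
# Hypothetical figures, for illustration only
reported_at_bid = 120        # critical defects disclosed during bidding
discovered_to_date = 310     # defects actually found as testing deepened
open_defects = 140           # currently unresolved
fixes_per_week = 25          # observed fix rate
discovery_per_week = 10      # new defects still surfacing each week

# Fixes must outpace ongoing discovery for the plan to converge; the surplus
# is what actually burns down the backlog.
net_closure_per_week = fixes_per_week - discovery_per_week
weeks_to_stabilize = open_defects / net_closure_per_week

print(f"Unforeseen scope vs. bid: {discovered_to_date - reported_at_bid} defects")
print(f"Projected weeks to stabilize at current rates: {weeks_to_stabilize:.1f}")
```

Mapping the projected weeks directly onto the adjusted plan is what made the unforeseen scope visible, rather than leaving it to appear as generic schedule slip.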

What I Would Change

Things I would have wanted to do differently in retrospect:

  1. As is probably obvious, the largest learning was that we didn’t make conservative enough assumptions in what we were inheriting with the project, the accuracy of testing information provided, or the code samples being “representative” of the entire codebase. In practice, though, had we estimated the work properly and attached the actual cost for doing the project, we might not have “sold” the proposal either…
  2. We didn’t properly factor changing requirements into our original estimates, partially because we were told the project was mid-testing and the solution largely built prior to our involvement. This added volatility to the project, as we were stabilizing the application without realizing the requirements weren’t frozen.  In retrospect, we should have done a better job probing on this during the bidding process itself
  3. Finally, we had challenges maintaining momentum where a dedicated client testing team would have made the iteration process more efficient. It may have been necessary to lean on augmentation or a partner to help balance ongoing business and the project, but the cost of extending the effort was substantial enough that it likely was worth investigating

 

Wrapping Up

As I said at the outset, having had the benefit of delivering a number of “impossible” projects over the course of my career, I’ve learned a lot about how to address the mess that software development can be in practice, even with disciplined leadership.  That being said, the great thing about having success is that it also tends to make you a lot more fearless the next time a challenge comes up, because you have an idea what it takes to succeed under adverse conditions.

I hope the stories were worth sharing.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 02/07/2023

Fast and Cheap, Isn’t Good…

Overview

Having touched on the importance of quality in accelerating value in my latest article Creating Value Through Strategy, I wanted to dive a little deeper into the topic of “speed versus quality”.

For those who may be unfamiliar, there is a general concept in project delivery that the three primary dimensions against which you operate are good (the level of effort you put into ensuring a product is well architected and meets functional and non-functional requirements at the time of delivery), fast (how quickly or often you produce results), and cheap (your ability to deliver the product/solution at a reasonable cost). 

The general assumption is that the realities of delivery lead you to having to prioritize two of the three (e.g., you can deliver a really good product fast, but it won’t be cheap; or you can deliver a really good product at a low cost, but it will take a lot of time [therefore not be “fast”]).  What this translates to, in my experience, has nearly always been that speed and cost are prioritized highest, with quality being the item compromised.

Where this becomes an issue is in the nature of the tradeoff that was made and the longer-term implications of those decisions.  Quality matters.  My assertion is that, where quality is compromised, “cheap” is only true in the short-term and definitely not the case overall.

The remainder of this article will explore several dimensions to consider when making these decisions.  This isn’t to say that there aren’t cases where there is a “good enough” level of quality to deliver a meaningful or value-added product or service.  My experience, however, has historically been that the concepts like “we didn’t have the time” or “we want to launch and learn” are often used as a substitute for discipline in delivery and ultimately undermine business value creation.

Putting Things in Perspective

Dimensions That Matter

I included the diagram above to put how I think of product delivery into perspective.  In the prioritization of good, fast, and cheap, what often occurs is that too much focus and energy goes into the time spent getting a new capability or solution to market, but not enough on what happens once it is there and the implications of that.  The remainder of this section will explore aspects of that worth considering in the overall context of product/solution development.

Some areas to consider in how a product is designed and delivered:

  • Architecture
    • Is the design of the solution modular and component- or service-based? This is important to the degree that capabilities may emerge over time that surpass what was originally delivered and, in a best-of-breed environment, you would ideally like to be able to replace part of a solution without having to fundamentally rearchitect or materially refactor the overall solution (a minimal sketch of this principle follows this list)
    • Does the solution conform to enterprise standards and guidelines? I’ve seen multiple situations where concurrent, large-scale efforts were designed and developed without consideration for their interoperability and adherence to “enterprise” standards.  By comparison, developing on a “program-“ or “project-level”, or in working with a monolithic technology/solution (e.g., with a relatively closed ERP system), creates technology silos that lead to a massive amount of technical debt as it is almost never the case that there is leadership appetite for refactoring or rewriting core aspects of those solutions over time
    • Is the solution cloud-native and does it support containerization to enable deployment of workloads across public and private clouds as well as the edge? In the highly complex computing environments of today, especially in industries like Manufacturing, the ability to operate and distribute solutions to optimize availability, performance, and security (at a minimum) is critical.  Where these dimensions aren’t taken into account, there would likely be an almost immediate need for modernization to offset the risk of technology obsolescence at some point in the next year or two
  • Security
    • Does the product or service leverage enterprise technologies and security standards? Managing vulnerabilities and migrating towards “zero trust” is a critical aspect of today’s technology environment, especially to the degree that workloads are deployed on the public cloud.  Where CI/CD pipelines are developed as part of a standard cloud platform strategy with integrated security tooling, the enterprise-level ability to manage, monitor, and mitigate security risk will be significantly improved
  • Integration
    • Does the product or service leverage enterprise technologies and integration standards? Interoperability with other internal and external systems, as well as your ability to introduce and leverage new capabilities and rationalize redundant solutions over time is fundamentally dependent on the manner in which applications are architected, designed, and integrated with the rest of a technology footprint.  Having worked in environments with well-defined standards and strictly enforced governance versus ones where neither were in place, the level of associated complexity and costs in the ultimate operating environments was materially different
  • Data Standards
    • Does the product or service align to overall master data requirements for the organization? Master data management can be a significant challenge from a data governance standpoint, which is why giving this consideration up front in a product development lifecycle is extremely important.  Where it isn’t considered in design, the end result could be master data that doesn’t map or align to other hierarchies in place, complicating integration and analytics intended to work across solutions, and the “cleanup” required of data stewards (to the degree that they are in place) could be expensive and difficult post-deployment
    • Are advanced analytics aspirations taken into account in the design process itself? This is an area becoming increasingly important given AI-enabled (“intelligent”) applications as discussed in my article on The Intelligent Enterprise.  Designing with data standards in mind, and with an eye towards how the data will be used to enable and drive analytics (likely in concert with data in other adjacent or downstream systems), is a step that can save considerable effort and cost downstream when properly addressed early in the product development cycle
  • “Good Enough”/Responsive Architecture
    • All the above points noted, I believe architecture needs to be appropriate to the nature of the solution being delivered. I’ve worked in environments where architecture standards were very “ivory tower”/theoretical in nature and made delivery extremely complex and costly, and in others where architecture was ignored and the delivery environment was essentially run with an “ask for forgiveness” or cowboy/superhero mentality.  The ideal state in my mind is somewhere in between: architecture appropriate to the delivery circumstances, but also mindful of the longer-term implications of the solution being delivered, so as to minimize technical debt and further interoperability in a connected enterprise ecosystem.
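On the modularity point above, here is a minimal sketch of the principle in Python, using a hypothetical payment-provider example (the names are invented for illustration). The application depends only on a contract, so a best-of-breed replacement can be swapped in without rearchitecting:

```python
from typing import Protocol

class PaymentProvider(Protocol):
    """The contract the rest of the solution depends on; implementations are replaceable."""
    def charge(self, amount_cents: int, account_id: str) -> str: ...

class LegacyGateway:
    def charge(self, amount_cents: int, account_id: str) -> str:
        # call the incumbent service here
        return f"legacy-txn-{account_id}-{amount_cents}"

class NewGateway:
    def charge(self, amount_cents: int, account_id: str) -> str:
        # a best-of-breed replacement adopted later
        return f"new-txn-{account_id}-{amount_cents}"

def checkout(provider: PaymentProvider, amount_cents: int, account_id: str) -> str:
    # Application logic depends only on the contract, not a specific vendor,
    # so replacing the provider doesn't ripple through the solution.
    return provider.charge(amount_cents, account_id)

print(checkout(LegacyGateway(), 4999, "A-123"))
print(checkout(NewGateway(), 4999, "A-123"))
```

The same idea scales up to services and APIs: the narrower and more stable the contract, the cheaper it is to replace what sits behind it.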

Thinking Total Cost of Ownership

What makes product/software development challenging is the level of unknowns that exist.  At any given time, when estimating a new endeavor, you have the known, the known unknown, and the complete unknown (because what you’re doing is outside your team’s collective experience).  The first two components can be incorporated into an estimation model that can be used for planning and the third component can be covered through some form of “contingency” load that is added to an estimate to account for those blind spots to a degree.
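As a minimal sketch of how those three components might roll into a planning estimate (hypothetical figures and an assumed 20% contingency load, purely for illustration):

```python
# Hypothetical figures, for illustration only
known_effort = 800            # hours: well-understood work
known_unknown_effort = 250    # hours: identified risks, sized with ranges

CONTINGENCY_RATE = 0.20       # assumed load covering the complete unknowns

base_estimate = known_effort + known_unknown_effort
contingency = base_estimate * CONTINGENCY_RATE
planning_estimate = base_estimate + contingency

print(f"Base: {base_estimate}h, contingency: {contingency:.0f}h, "
      f"planning estimate: {planning_estimate:.0f}h")
```

The appropriate contingency rate is a judgment call that should reflect how far the work sits outside the team’s collective experience.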

Where things get complicated is, once execution begins, the desire to meet delivery commitments (and the associated pressure thereof) can influence decisions being made on an ongoing basis.  This is complicated by the normal number of surprises that occur during any delivery effort of reasonable scale and complexity (things don’t work as expected, decisions or deliverables are delayed, requirements become increasingly clear over time, etc.).  The question is whether a project has both disciplined, courageous leadership in place and the appropriate level of governance to make sure that, as decisions need to be made in the interest of arbitrating quality, cost, time, and scope, they are made with total cost of ownership in mind.

As an example, there was a point in the past where I encountered a large implementation program ($100MM+ in scale) with a timeline of over a year to deploy an initial release.  During the project, the team announced that all the pivotal architecture decisions needed to be made within a one-week window of time, suggesting that the “dates wouldn’t be met” if that wasn’t done.  That logic was then used at a later point to decide that standards shouldn’t be followed for other key aspects of the implementation in the interest of “meeting delivery commitments”.  What was unfortunate in this situation was that, not only were good architecture and standards not implemented, the project encountered technical challenges (likely due to one or two of those root causes, among other things) that caused it to be delivered over a year late regardless.  The resulting solution was more difficult to maintain, integrate, scale, or leverage for future business needs.  In retrospect, was any “speed” obtained through that decision making process and the lack of quality in the solution?  Certainly not, and this situation unfortunately isn’t unique to larger scale implementations in my experience.  In these cases, the ongoing run rate of the program itself can become an excuse to make tactical decisions that ultimately create a very costly and complex solution to manage and maintain in the production environment, none of which anyone typically wants to remediate or rewrite post-deployment.

So, given the above example, the argument could be made that the decisions were a result of inexperience or pure unknowns that existed when the work was estimated and planned to begin with, which is a fair point.  Two questions come to mind in terms of addressing this situation:

  • Are ongoing changes being reviewed through a change control process only in relation to project cost, scope, and deadline, or are the longer-term implications in terms of technical debt and operating cost of ownership also considered? Compromise is a reality of software delivery and there isn’t a “perfect world” situation pretty much ever in my experience.  That being said, these choices should be conscious ones, made with full transparency and in a thoughtful manner, which is often not the case, especially when the pressures surrounding a project are high to begin with.
  • Are the “learnings” obtained on an ongoing basis factored into the estimation and planning process so as to mitigate future needs to compromise quality when issues arise? Having been part of and worked closely with large programs over many years, there isn’t a roadmap that ever plays out in practice how it is drawn up on paper at the outset.  That being said, every time the roadmap is revised, as pivot points in the implementation are reached and plans adjusted, are learnings being incorporated such that mistakes or sacrifices to quality aren’t being repeated over and over again?  This is a tangible thing that can be monitored and governed over time.  In the case of Agile-driven efforts, it would be as simple as looking for patterns in the retrospectives (post-sprint) to see whether the process is improving or repeating the same mistakes (a very correctible situation with disciplined delivery leadership; see the sketch after this list)
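To show how simple that retrospective-pattern check can be, here is a minimal sketch in Python, with hypothetical tagged findings; in practice the themes would come from whatever tool captures retrospective items:

```python
from collections import Counter

# Hypothetical retrospective findings, tagged by theme, across several sprints
retro_findings = [
    ("sprint-14", "environment instability"),
    ("sprint-15", "late requirement changes"),
    ("sprint-15", "environment instability"),
    ("sprint-16", "environment instability"),
    ("sprint-16", "underestimated testing"),
]

recurrence = Counter(theme for _, theme in retro_findings)

# A theme that recurs across sprints suggests the process is repeating a
# mistake rather than improving: a governable, correctible signal.
for theme, count in recurrence.most_common():
    if count > 1:
        print(f"Recurring theme across {count} sprints: {theme}")
```

Nothing about this requires tooling investment; the discipline is in tagging findings consistently and acting on the recurrences.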

Speed on the Micro- Versus Macro-Scale

I touched on this somewhat in the previous point, but the point to call out here is that tactical decisions that compromise quality in the interest of the “upcoming release” can and often do create technical issues that will ultimately make downstream delivery more difficult (i.e., slower and more costly).

As an example, there was a situation in the past where a team integrated technology from multiple vendors that provided the same underlying capability (i.e., the sourcing strategy didn’t have a “preferred provider”, so multiple buys were done over time using different partners, sometimes in parallel).  In each case, the desire from the team was to deliver solutions as rapidly as possible in the interest of “meeting customer demand” and they were recognized and rewarded for doing so at speed.  The problem with this situation was that the team perceived standards as an impediment to the delivery process and, therefore, either didn’t leverage any or did so on a transactional or project-level basis.  Where this became problematic was where there became a need to:

  • Replace a given vendor – other partners couldn’t be leveraged because they weren’t integrated in a common way
  • Integrate across partners – the technology stack was different and defined unique to each use case
  • Run analytics across solutions – data standards weren’t in place so that underlying data structures were in a common format

The point of sharing the example is that, at a micro-level, the team’s approach seems fast, cheap, and appropriate.  The accumulation of the technical debt, however, is substantial when you scale and operate under that mindset for an extended period of time, and it limits your ability to leverage those investments, migrate to new solutions, introduce new capabilities quickly and effectively, and integrate across individual point solutions where needed.  Some form of balance should be in place to optimize the value created and cost of ownership over time.  Without it, the technical debt will undermine the business value in time.

Consulting Versus Corporate Environments

Having worked in both corporate and consulting environments, it’s interesting to me that there can be a different perspective on quality depending on where you sit (and the level of governance in place) in relation to the overall delivery.

Generally speaking, it’s somewhat common on the corporate side of the equation to believe that consultants lack the knowledge of your systems and business to deliver solutions you could yourself “if you had the time”.  By contrast, on the consulting side, ideally, you believe that clients are thinking of you as a “hired gun” when it comes to implementations, because you’re bringing in necessary skills and capacity to deliver on something they may not have the experience or bench strength to deliver on their own.

So, with both sides thinking they know more than the other and believing they are capable of doing a quality job (no one does a poor job on purpose), why is quality so often left unattended on larger scale efforts?  

On this point:

  • The delivery pressures and unknowns I mentioned above apply regardless of who is executing a project.
  • A successful delivery in many cases requires a blend of internal and external resources (to the extent they are being leveraged) so there is a balance of internal knowledge and outside expertise to deliver the best possible solution from an objective standpoint.
  • Finally, you can’t deliver to standards of excellence that aren’t set. I’ve seen and worked in environments (both as a “client” and as a consultant) where there were very exacting standards and expectations of quality and ones where quality wasn’t governed at the level it should be

I didn’t want to belabor this aspect of delivery, but it is interesting how the perspective and influence over quality decisions can be different depending on one’s role in the delivery process (client, consultant, or otherwise).

Wrapping Up

Bringing things back to the overall level, the point of writing this article was to provide some food for thought on the good, fast, cheap concept and the reality that, in larger and more complex delivery situations, the cost of speed isn’t always evaluated effectively.  There is no “perfect world”, for certain, but having discipline, thinking through some of the dimensions above, and making sure the tradeoffs made are thoughtful and transparent in nature could help improve value/cost delivered over time.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 12/10/2023

Creating Value Through Strategy

Context

One of the things that I’ve come to appreciate over the course of time is the value of what I call “actionable strategy”.  By this, I mean a blend of the conceptual and practical, a framework that can be used to set direction and organize execution without being too prescriptive, while still providing a vision and mental model for leadership and teams to understand and align on the things that matter.

Without a strategy, you can have an organization largely focused on execution, but that tends to create significant operating or technical debt and complexity over time, ultimately having an adverse impact on competitive advantage, slowing delivery, and driving significant operating cost.  Similarly, a conceptual strategy that doesn’t provide enough structure to organize and facilitate execution tends to create little impact over time as teams don’t know how to apply it in a practical sense, or it can add significant overhead and cost in the administration required to map its strategic objectives to the actual work being done across the organization (given they aren’t aligned up front or at all).  The root causes of these situations can vary, but the important point is to recognize the criticality of an actionable business-aligned technology strategy and its role in guiding execution (and thereby the value technology can create for an organization).

In reality, there are so many internal and external factors that can influence priorities in an organization over time, that one’s ability to provide continuity of direction with clear conceptual outcomes (while not being too hung up on specific “tasks”) can be important in both creating the conditions for transformation and sustainable change without having to “reset” that direction very often.  This is the essence of why framework-centric thinking is so important in my mind.  Sustainable change takes time, because it’s a mindset, a culture, and way of operating.  If a strategy is well-conceived and directionally correct, the activities and priorities within that model may change, but the ability to continue to advance the organization’s goals and create value should still exist.  Said differently: Strategies are difficult to establish and operationalize.  The less you have to do a larger-scale reset of them, the better.  It’s also far easier to adjust priorities and activities than higher-level strategies, given the time it takes (particularly in larger organizations) to establish awareness of a vision and strategy.  This is especially true if the new direction represents a departure from what has been in place for some time.

To be clear, while there is a relationship between this topic and what I covered in my article on Excellence By Design, the focus there is more on the operation and execution of IT within an organization, not so much the vision and direction of what you’d ideally like to accomplish overall.

The rest of this article will focus on the various dimensions that I believe comprise a good strategy, how I think about them, and ways that they could create measurable impact.  There is nothing particularly “IT-specific” about these categories (i.e., this is conceptually akin to ‘better, faster, cheaper’) and I would argue they could apply equally well to other areas of a business, but differ in how they translate on an operating level.

In relation to the Measures outlined in each of the sections below, a few notes for awareness:

  • I listed several potential areas to consider and explore in each section, along with some questions that come to mind with each.
  • The goal wasn’t to be exhaustive or to suggest that I’d recommend tracking any or all of them on an “IT Scorecard”, but rather to provide some food for thought
  • My general point of view is that it’s better to track as little as possible from an “IT reporting” standpoint, unless there is intention to leverage those metrics to drive action and decisions. My experience with IT metrics historically is that they are overreported and underleveraged (and therefore not a good use of company time and resources).  I touch on some of these concepts in the article On Project Health and Transparency

Innovate

What It Is and Why It Matters

Stealing from my article on Excellence By Design: “Relentless innovation is the notion that anything we are doing today may be irrelevant tomorrow, and therefore we should continuously improve and reinvent our capabilities to ones that create the most long-term value.”

Technology is evolving at a rate faster than most organizations’ ability to adopt or integrate those capabilities effectively.  As a result, a company’s ability to leverage these advances becomes increasingly challenging over time, especially to the degree that the underlying environment isn’t architected in a manner to facilitate their integration and adoption. 

The upshot of this is that the benefits to be achieved could be marginalized as any attempts to capitalize on these innovations will likely become point solutions or one-off efforts that don’t scale or create a different form of technical debt over time.  This is very evident in areas like analytics where capabilities like GenAI and other artificial intelligence-oriented solutions are only as effective as the underlying architecture of the environment into which they are integrated.  Are wins possible that could be material from a business standpoint?  Absolutely yes.  Will it be easy to scale them if you don’t invest in foundational things to enable that?  Very likely not.

The positive side of this is that technology is in a much different place than it was ten or twenty years ago, where it can significantly improve or enhance a company’s capabilities or competitive position.  Even in the most arcane of circumstances, there likely is an opportunity for technology to fuel change and growth in a digital business environment, whether that is internal to the operations of a company, or through its interactions with customers, suppliers, or partners (or some combination thereof).

Key Dimensions to Consider

Thinking about this area, a number of dimensions came to mind:

  • Promoting Courageous Leadership
    • This begins by acknowledging that leadership is critical to setting the stage for innovation over time
    • There are countless examples of organizations that were market leaders who ultimately lost their competitive advantage due to complacency or an inability to see or respond to changing market conditions effectively
  • Fueling Competitive Advantage
    • This is about understanding how technology helps create competitive advantage for a company and focusing in on those areas rather than trying to do everything in an unstructured or broad-based way, which would likely diffuse focus, spread critical resources, and marginalize realized benefits over time
  • Investing in Disciplined Experimentation
    • This is about having a well-defined process to enable testing out new business and technology capabilities in a way that is purposeful and that creates longer-term benefits
    • The process aspect of this is important, as it is relatively easy to spin up a lot of “innovation and improvement” efforts without taking the time to understand and evaluate the value and implications of those activities in advance. The problem is that you can end up wasting money where the return on investment isn’t significant, or developing concepts that can’t easily be scaled to production-level solutions, which will limit their value in practice
  • Enabling Rapid Technology Adoption
    • This dimension is about understanding the role of architecture, standards, and governance in integrating and adopting new technical capabilities over time
    • As an example, an organization with an established component (or micro-service) architecture and integration strategy should be able to test and adopt new technologies much faster than one without them. That isn’t to suggest it can’t be done, but rather that the cost and time to execute those objectives will increase as delivery becomes more of a brute force situation than one enabled by a well-architected environment
  • Establishing a Culture of Sustainability
    • Following onto the prior point, as new solutions are considered, tested, and adopted, product lifecycle considerations should come into play.
    • Specifically, as part of the introduction of something new, is it possible to replace or retire something that currently exists?
    • At some point, when new technologies and solutions are introduced in a relatively ungoverned manner, it will only be a matter of time before the cost and complexity of the technology footprint will choke an organization’s ability to continue to both leverage those investments and to introduce new capabilities rapidly.

Measuring Impact

Several ways to think about impact:

  • Competitive Advantage
    • What is a company’s current position relative to its competition in markets where they compete and on metrics relative to those markets?
  • Market Differentiation
    • Is innovation fueling new capabilities not offered by competitors?
    • Is the capability gap widening or narrowing over time?
    • I separated these first two points, though they are arguably flavors of the same thing, to emphasize the importance of looking at both capabilities and outcomes from a competitive standpoint. One can be doing very well from a competitive standpoint relative to a given market, but have competitors developing or extending their capabilities faster, in which case, there could be risk of the overall competitive position changing in time
  • Reduced Time to Adopt New Solutions
    • What is the average length of time between a major technology advancement (e.g., cloud computing, artificial intelligence) becoming available and an organization’s ability to perform meaningful experiments and/or deploy it in a production setting?
    • What is the ratio of investment on infrastructure in relation to new technologies meant to leverage it over time?
  • Reduced Technical Debt
    • What percentage of experiments turn into production solutions?
    • How easy is it to scale those production solutions (vertically or horizontally) across an enterprise?
    • Are new innovations enabling the elimination of other legacy solutions? Are they additive and complementary or redundant at some level?

Accelerate

What It Is and Why It Matters

“Take as much time as you need; let’s make sure we do it right, no matter what.”  This is a declaration that I don’t think I’ve ever heard in nearly thirty-two years in technology.  Speed matters, whether it’s labeled “first mover advantage” or anything else one could place upon the desire to produce value at a pace that is at or beyond an organization’s ability to integrate and assimilate all the changes.

That being said, the means to speed is not just a rush to iterative methodology.  The number of times I’ve heard or seen “Agile Transformation” (normally followed by months of training people on concepts like “Scrum meetings”, “Sprints”, and “User Stories”) posed as a silver bullet to providing disproportionate delivery results goes beyond my ability to count and it’s unfortunate.  Similarly, I’ve heard glorified versions of perpetual hackathons championed, where the delivery process involves cobbling together solutions in a “launch and learn” mindset that ultimately are poorly architected, can’t scale, aren’t repeatable, create massive amounts of technical debt, and never are remediated in production.  These are cases where things done in the interest of “speed” actually destroy value over time.

That being said, moving from monolithic to iterative (or product-centric) approaches and DevSecOps is generally a good thing to do.  Does this remedy issues in a business/IT relationship, solve for a lack of architecture, standards and governance, address an overall lack of portfolio-level prioritization, or a host of other issues that also affect operating performance and value creation over time?  Absolutely not.

The dimensions discussed in this section are meant to highlight a few areas beyond methodology that I believe contribute to delivering value at speed, and ones that are often overlooked in the interest of a “quick fix” (which changing methodology generally isn’t).

Key Dimensions to Consider

Dimensions that are top of mind in relation to this area:

  • Optimizing Portfolio Investments
    • Accelerating delivery begins by first taking a look at the overall portfolio makeup and ensuring the level of ongoing delivery is appropriate to the capabilities of the organization. This includes utilization of critical knowledge resources (e.g., planning on a named-resource versus an FTE basis; see the sketch after this list), leverage of an overall release strategy, alignment of variable capacity to the right efforts, etc.
    • Said differently, when an organization tries to do too much, it tends to do a lot of things ineffectively, even under the best of circumstances. This does not help enhance speed to value at the overall level
  • Promoting Reuse, Standards, and Governance
    • This dimension is about recognizing the value that frameworks, standards and governance (along with architecture strategy) play in accelerating delivery over time, because they become assets and artifacts that can be leveraged on projects to reduce risk as well as effort
    • Where these things don’t exist, there almost certainly will be an increase in project effort (and duration) and technical debt that ultimately will slow progress on developing and integrating new solutions into the landscape
  • Facilitating Continuous Improvement
    • This dimension is about establishing an environment where learning from mistakes is encouraged and leveraged proactively on an ongoing basis to improve the efficacy of estimation, planning, execution, and deployment of solutions
    • It’s worth noting that this is as much an issue of culture as of process, because teams need to know that it is safe, expected, and appreciated to share learnings on delivery efforts if there is to be sustainable improvement over time
  • Promoting Speed to Value
    • This is about understanding the delivery process, exploring iterative approaches, ensuring scope is managed and prioritized to maximize impact, and so on
    • I’ve written separately that methodology only provides a process, not necessarily a solution to underlying cultural or delivery issues that may exist. As such, it is part of what should be examined and understood in the interest of breaking down monolithic approaches and delivering value at a reasonable pace and frequency, but it is definitely not a silver bullet.  Those don’t exist, nor will they ever.
  • Establishing a Culture of Quality
    • In the proverbial “Good, Fast, or Cheap” triangle, the general assumption is that you can only choose two of the three as priorities and accept that the third will be compromised. Given that most organizations want results to be delivered quickly and don’t have unlimited financial resources, the implication is that quality will be the dimension that suffers.
    • The irony of this premise is that, where quality is compromised repeatedly on projects, the general outcome is that technical debt will be increased, maintenance effort along with it, and future delivery efforts will be hampered as a consequence of those choices
    • As a result, in any environment where speed is important, quality needs to be a significant focus so ongoing delivery can be focused as much as possible on developing new capabilities and not fixing things that were not delivered properly to begin with
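On the named-resource point above, here is a minimal sketch of why planning at that level matters (Python, with hypothetical allocations; FTE-level math would show adequate total capacity while hiding the overload on a critical individual):

```python
# Hypothetical allocations of named resources across concurrent efforts
allocations = {
    "lead_architect": [("Project A", 0.6), ("Project B", 0.5), ("Support", 0.2)],
    "senior_dev":     [("Project A", 0.8)],
    "data_engineer":  [("Project B", 0.5), ("Analytics", 0.4)],
}

# Flag anyone allocated beyond 100%; these are the bottlenecks that slow
# everything down regardless of how much aggregate capacity exists.
for person, work in allocations.items():
    total = sum(share for _, share in work)
    flag = "  <-- over-allocated" if total > 1.0 else ""
    print(f"{person}: {total:.0%} allocated{flag}")
```

In aggregate, the organization above might look like it has headroom; the named view shows the lead architect is committed at 130%, which is where delivery will actually stall.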

Measuring Impact

Several ways to think about impact:

  • Reduced Time to Market
    • What is the average time from approval to delivery?
    • What is the percentage of user stories/use cases delivered per sprint (in an iterative model)? What level of spillover/deferral is occurring on an ongoing basis (this can be an indicator of estimation, planning, or execution-related issues)?
    • Are retrospectives part of the delivery process and valuable in terms of their learnings?
  • Increase in Leverage of Standards
    • Is there an architecture review process in place? Are standards documented, accessible, and in use?  Are findings from reviews being implemented as an outcome of the governance process?
    • What percentage of projects are establishing or leveraging reusable common components, services/APIs, etc.?
  • Increased Quality
    • Are defect injection rates trending in a positive direction?
    • What level of severity 1/2 issues are uncovered post-production in relation to those discovered in testing pre-deployment (efficacy of testing; see the sketch after this list)?
    • Are criteria in place and leveraged for production deployment (whether leveraging CI/CD processes or otherwise)?
    • Is production support effort for critical solutions decreasing over time (non-maintenance related)?
  • Lower Average Project Cost
    • Is the average labor cost/effort per delivery reducing on an ongoing basis?
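The testing-efficacy point above reduces to a simple ratio; here is a minimal sketch (Python, hypothetical counts for illustration):

```python
# Hypothetical severe (severity 1/2) defect counts for one release
sev12_pre_deployment = 45    # found in testing before go-live
sev12_post_production = 5    # found by users/operations after go-live

total = sev12_pre_deployment + sev12_post_production

# Share of severe defects caught before production; tracked release over
# release, a rising share suggests testing efficacy is improving.
testing_efficacy = sev12_pre_deployment / total
print(f"Severe defects caught pre-deployment: {testing_efficacy:.0%}")
```

The single number matters less than its trend across releases, which is what makes it governable.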

Optimize

What It Is and Why It Matters

Along with the pursuit of speed, it is equally important to pursue “simplicity” in today’s complex technology environment.  With so many layers now being present, from hosted to cloud-based solutions, package and custom software, internal and externally integrated SaaS and PaaS solutions, digital equipment and devices, cyber security requirements, analytics solutions, and monitoring tools… complexity is everywhere.  In large organizations, the complexity tends to be magnified for many reasons, which can create additional complexities in and across the technology footprint and organizations required to design, deliver, and support integrated solutions at scale.

My experience with optimization historically is that it tends to be too reactive a process, and generally falls by the wayside when business conditions are favorable.  The problem with this is the bloat and inefficiency that tends to be bred in a growth environment, which ultimately reduces the value created by IT with increasing levels of spend.  That is why a purposeful approach that is part of a larger portfolio allocation strategy is important.  Things like workforce and sourcing strategy, modernization, ongoing rationalization and simplification, standardization, and continuous improvement are important to offset what otherwise could lead to a massive “correction” the minute conditions change.  I would argue that, similar to performance improvement in software development, an organization should never be so cost inefficient that a massive correction is even possible.  For that to be the case, something extremely disruptive should have occurred; otherwise, the discipline in delivery and operations likely wasn’t where it needed to be leading up to that adjustment.

I’ve highlighted a few dimensions that are top of mind in regard to ongoing optimization, but have written an entire article on optimizing value over cost that is a more thorough exploration of this topic if this is of interest (Optimizing the Value of IT).

Key Dimensions to Consider

Dimensions that are top of mind in relation to this area:

  • Reducing Complexity
    • There is some very simple math related to complexity in an IT environment, which is that increasing complexity drives a (sometimes disproportionate) increase in cost and time to deliver solutions, especially where there is a lack of architecture standards and governance (see the sketch after this list)
    • In areas like Integration and Analytics, this is particularly important, given they are both foundational and enable a significant amount of business capabilities when done well
    • It is also important to clarify that reducing complexity doesn’t necessarily equate to reducing assets (applications, data solutions, technologies, devices, integration endpoints, etc.), because it could be the case that the number of desired capabilities in an organization requires an increasing number of solutions over time. That being said, with the right integration architecture and associated standards, as an example, the ability to integrate and rationalize solutions will be significantly easier and faster than without them (which is complexity of a different kind)
  • Optimizing Ongoing Costs
    • I recently wrote an article on Optimizing the Value of IT, so I won’t cover all that material again here
    • The overall point is that there are many levers available to increase value while managing or reducing technology costs in an enterprise
    • That being said, aggregate IT spend can and may increase over time, and be entirely appropriate depending on the circumstances, as long as the value delivered increases proportionately (or in excess of that amount)
  • Continually Modernizing
    • The mental model that I’ve used for support for a number of years is to liken it to city planning and urban renewal. Modernizing a footprint is never a one-time event; it needs to be a continuous process
    • Where this tends to break down in many organizations is the “Keep the Lights On” concept, which suggests that maintenance spend should be minimized on an ongoing basis to allow the maximum amount of funding for discretionary efforts that advance new capabilities
    • The problem with this logic is that it can tend to lead to neglect of core infrastructure and solutions that then become obsolete, unsupportable, pose security risks, and that approach end of life with only very expensive and disruptive paths to upgrade or modernize them
    • It would be far easier to carve out a portion of the annual spend allocation for a thoughtful and continuous modernization where these become ongoing efforts, are less disruptive, and longer-term costs are managed more effectively at lower overall risk
  • Establishing and Maintaining a Workforce Strategy
    • I have an article in my backlog for this blog around workforce and sourcing strategy, having spent time developing both in the past, so I won’t elaborate too much on this right now other than to say it’s an important component in an organizational strategy for multiple reasons, the largest being that it enables you to flex delivery capability (up and down) to match demand while maintaining quality and a reasonable cost structure
  • Proactively Managing Performance
    • Unpopular though it is, my experience in many of the organizations in which I’ve worked over the years has been that performance management is handled on a reactive basis
    • Particularly when an organization is in a period of growth, notwithstanding extreme situations, the tendency can be to add people and neglect the performance management process with an “all hands on deck” mentality that ultimately has a negative impact on quality, productivity, morale, and other measures that matter
    • This isn’t an argument for formula-driven processes, as I’ve worked in organizations that have forced performance curves against an employee population, and sometimes to significant, detrimental effect. My primary argument is that I’d rather have an environment with 2% involuntary annual attrition (conceptually), than one where it isn’t managed at all, market conditions change, and suddenly there is a push for a 10% reduction every three years, where competent “average” talent is caught in the crossfire.  These over-corrections cause significant disruption, have material impact on employee loyalty, productivity, and morale, and generally (in my opinion) are the result of neglecting performance management on an ongoing basis
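One common way to make the “simple math” on complexity concrete is the integration-link count: point-to-point connections grow roughly with the square of the number of systems, while a standard integration layer grows linearly.  A stylized sketch (Python; this is an idealized comparison, not a claim about any specific environment):

```python
# Point-to-point integration: every pair of systems is a potential link.
def point_to_point_links(n: int) -> int:
    return n * (n - 1) // 2

# Standard integration layer: one adapter per system.
def standard_layer_adapters(n: int) -> int:
    return n

for n in (10, 25, 50):
    print(f"{n} systems: {point_to_point_links(n)} point-to-point links "
          f"vs {standard_layer_adapters(n)} standard adapters")
```

At 50 systems that is 1,225 potential links versus 50 adapters, which is the disproportionate cost growth in miniature.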

Measuring Impact

Several ways to think about impact:

  • Increased Value/Cost Ratio
    • Is the value delivered for IT-related effort increasing in relation to cost (whether the latter is increasing, decreasing, or remaining flat)?
  • Reduced Overall Assets
    • Has the number of duplicated/functionally equivalent/redundant assets (applications, technologies, data solutions, devices, etc.) been reduced over time?
  • Lower Complexity
    • Is the percentage of effort on the average delivery project spent on addressing issues related to a lack of standards, unique technologies, redundant systems, etc. reducing over time?
  • Lower Technical Debt
    • What percentage of overall IT spend is committed to addressing quality, technology, end-of-life, or non-conformant solutions (to standards) in production on an ongoing basis?

Inspire

What It Is and Why It Matters

Having written my last article on culture, I’m not going to dive deeply into the topic, but I believe the subject of employee engagement and retention (“People are our greatest asset…”) is often spoken about, but not proportionately acted on in deliberate ways.  It is far different, as an example, to tell employees their learning and development is important, but then either not provide the means for them to receive training and education or put “delivery” needs above that growth on an ongoing basis.  It’s expedient on a short-term level, but the cost to an organization in loyalty, morale, and ultimately productivity (and results) is significant.

Inspiration matters.  I fundamentally believe you achieve excellence as an organization by enrolling everyone possible in creating a differentiated and special workplace.  Having worked in environments where there was a contagious enthusiasm in what we were doing and also in ones I’d consider relatively toxic and unhealthy, there’s no doubt on the impact it has on the investment people make in doing their best work.

Following onto this, I believe there is also a distinction to be drawn in engaging the “average” employees across the organization versus targeting the “top performers”.  I have written about this previously, but top performers, while important to recognize and leverage effectively, don’t generally struggle with motivation (it’s part of what makes them top performers to begin with).  The problem is that placing a disproportionate amount of management focus on this subset of the employee population can have a significant adverse impact, because the majority of an organization is not “top performers” and that’s completely fine.  If the engagement, output, and productivity of the average employee is elevated even marginally, the net impact to organizational results should be fairly significant in most environments.

The dimensions below represent a few ways that I think about employee engagement and creating an inspired workplace.

Key Dimensions to Consider

Dimensions that are top of mind in relation to this area:

  • Becoming an Employer of Choice
    • Reputation matters. Very simple, but relevant point
    • This becomes real in how employees are treated on a cultural and day-to-day level, compensated, and managed even in the situation where they exit the company (willingly or otherwise)
    • Having worked for and with organizations that have had a “reputation” that is unflattering in certain ways, the thing I’ve come to be aware of over time is how important that quality is, not only when you work for a company, but the perception of it that then becomes attached to you afterwards
    • Two very simple questions to employees that could serve as a litmus test in this regard (a simple scoring sketch follows this list):
      • If you were looking for a job today, knowing what you know now, would you come work here again?
      • How likely would you be to recommend this as a place to work to a friend?
  • Promoting a Healthy Culture
    • Following onto the previous point, I recently wrote about The Criticality of Culture, so I won’t delve into the mechanics of this beyond the fact that dedicated, talented employees are critical to every organization, of any size, and the way in which they are treated and the environment in which they work is crucial to optimizing the experience for them and the results that will be obtained for the organization as a whole
  • Investing in Employee Development
    • Having worked in organizations where there was an explicit, dedicated commitment to ongoing education and development, and in others where there was “never time” to invest or “delivery commitments” interfered with people’s learning and growth, the consequent impact on productivity and organizational performance has always been fairly obvious and very negative from my perspective
    • A healthy culture should create space for people to learn and grow their skills, particularly in technology, where the landscape is constantly changing and there is a substantial risk of skills becoming atrophied if not reinforced and evolved as things change.
    • This isn’t an argument for random training, of course, as there should be applicability for the skills into which an organization invests on behalf of its employees, but it should be an ongoing priority as much as any delivery effort so you maintain your ability to integrate new technology capabilities as and when they become available over time
  • Facilitating Collaboration
    • This and the next dimension are both discussed in the above article on culture, but the overall point is that creating a productive workplace goes beyond the individual employee to encouraging collaboration and seeking the kind of results discussed in my article on The Power of N
    • The secondary benefit from a collaborative environment is the sense of “connectedness” it creates across teams when it’s present, which would certainly help productivity and creativity/solutioning when part of a healthy, positive culture
  • Creating an Environment of Transparency
    • Understanding there are always certain things that require confidentiality or limited distribution (or both), the level of transparency in an environment helps create connection between the individual and the organization as well as helping to foster and engender trust
    • Reinforcing the criticality of communication in creating an inspiring workplace is extremely obvious, but having seen situations where the opposite is in place, it’s worth noting regardless

Measuring Impact

Several ways to think about impact:

  • Improved Productivity
    • Is more output being produced on a per FTE basis over time?
    • Are technologies like Copilot being leveraged effectively where appropriate?
  • Improved Average Utilization
    • Are utilization statistics reflecting healthy levels (i.e., not significantly over- or under-allocated) on an ongoing basis (assuming plans/actuals are reasonably reflected)?
  • Improved Employee Satisfaction
    • Are employee surveys trending in a positive direction in terms of job satisfaction?
  • Lower Voluntary Attrition
    • Are voluntary attrition metrics declining over time?

Perform

What It Is and Why It Matters

Very simply put: all the aspirations to innovate, grow, and develop capabilities don’t mean a lot if your production environment doesn’t support business and customer needs exceptionally well on a day-to-day basis.

As a former account executive and engagement manager in consulting at various organizations, any account strategy for me always began with one statement: “Deliver with quality”.  If you don’t block and tackle well in your execution, the best vision and set of strategic goals will quickly be set aside until you do.  This is fundamentally about managing infrastructure, availability, the performance of critical solutions, and security.  In all cases, it can be easy to operate in a reactive capacity, and to become complacent about it, rather than looking for ways to improve, simplify, and drive greater stability, security, and performance over time.

As an example, I experienced a situation where an organization spent tens of millions of dollars annually on production support, budgeting for things that essentially hadn’t broken yet, but had no explicit plan or spend targeted at addressing the root causes of the issues themselves.  Thankfully, we were able to reverse that situation and plan some proactive efforts that ultimately took millions out of that spend by simply executing a couple of projects.  In that case, the issue was the mindset: assuming we had to operate in a reactive rather than proactive way, while the effort and dollars being consumed could have been better applied to developing new business capabilities rather than continuing to band-aid issues we’d never addressed.

Another situation that is fairly prevalent today is the role of FinOps in managing cloud costs.  Without governance, the convenience of spinning up cloud assets and services can add considerable complexity, cost, and security exposure, all under the promise of shifting from a CapEx to an OpEx environment.  The reality is that managing it effectively takes maturity and discipline, and it requires ongoing focus so it doesn’t become problematic over time.
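
To make the governance point concrete, below is a minimal sketch of one guardrail FinOps discipline implies: flagging cloud resources missing the tags that cost allocation depends on.  The resource records and tag names are hypothetical; a real implementation would pull inventory from the cloud provider’s APIs.

```python
# Cost allocation breaks the moment resources can't be attributed to an
# owner or budget, so enforce a minimum tag set (names are hypothetical).
REQUIRED_TAGS = {"owner", "cost_center", "environment"}

def untagged(resources: list[dict]) -> list[str]:
    """Return the IDs of resources that can't be allocated to a budget."""
    return [r["id"] for r in resources
            if not REQUIRED_TAGS <= set(r.get("tags", {}))]

inventory = [
    {"id": "vm-001", "tags": {"owner": "team-a", "cost_center": "123", "environment": "prod"}},
    {"id": "vm-002", "tags": {"owner": "team-b"}},  # spun up for convenience, never tagged
]
print(untagged(inventory))  # ['vm-002']
```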

There are many ways to think about managing and optimizing production, but the dimensions that come to mind as worthy of some attention are expressed below.

Key Dimensions to Consider

Dimensions that are top of mind in relation to this area:

  • Providing Reliability of Critical Solutions
    • Having worked with a client where the health of critical production solutions had deteriorated to the point that it became the top IT priority, this can’t be overlooked in any strategy
    • It’s great to advance capabilities through ongoing delivery work, but if you can’t operate and support critical business needs on a daily level, it doesn’t matter
  • Effectively Managing Vulnerabilities
    • With the increase in complexity in managing technology environments today, internal and external to an organization, cyber exposure is growing at a rate faster than anyone can manage it fully
    • To that end, having a comprehensive security strategy, covering external and internal threats, ransomware, etc. (from the “outside-in”), is critical to ensuring ongoing operations with minimal risk
  • Evolving Towards a “Zero Trust” Environment
    • Similar to the previous point, while the definition of “zero trust” continues to evolve, managing a conceptual “least privilege” environment (from the “inside-out”) that protects critical assets, applications, and data is an imperative in today’s complex operating environment (a minimal sketch follows this list)
  • Improving Integrated Solution Performance
    • Again, with the increasing complexity and distribution of solutions in a connected enterprise (including third party suppliers, partners, and customers), the end user experience of these solutions is a consideration that will only grow in importance
    • While there are various solutions for application performance monitoring (APM) on the market today, the need for integrated monitoring, analytics, and optimization tools will likely increase over time to help govern and manage critical solutions where performance characteristics matter
  • Developing a Culture Surrounding Security
    • Finally, in relation to maintaining an effective (physical and cyber) security posture, while deliberate strategies for vulnerability management and zero trust are the methods by which risk is managed and mitigated, there is equally a mindset that needs to be established and integrated into an organization for risk to be managed effectively
    • This dimension is meant to recognize the need to provide adequate training, review key delivery processes (along with associated roles and responsibilities), and evaluate tools and safeguards to create an environment conducive to managing security overall
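
As a minimal sketch of the “least privilege” concept referenced above (principals, resources, and actions are hypothetical; a real implementation would sit behind an identity provider and a policy engine), access is denied unless an explicit grant exists:

```python
# Default-deny ("least privilege") authorization: access is granted only
# when an explicit grant exists for the principal, resource, and action.
GRANTS = {
    ("svc-reporting", "warehouse:sales", "read"),
    ("svc-billing",   "queue:invoices",  "write"),
}

def is_allowed(principal: str, resource: str, action: str) -> bool:
    return (principal, resource, action) in GRANTS

assert is_allowed("svc-reporting", "warehouse:sales", "read")
assert not is_allowed("svc-reporting", "warehouse:sales", "write")  # denied by default
```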

Measuring Impact

Several ways to think about impact:

  • Increased Availability
    • Is the reliability of critical production solutions improving over time and within SLAs?
  • Lower Cybersecurity Exposure
    • Is a thoughtful plan for managing cyber security in place, being executed, monitored, and managed on a continuous basis?
    • Do disaster recovery and business continuity plans exist and are they being tested?
  • Improved Systems Performance
    • Are end user SLAs met for critical solutions on an ongoing basis?
  • Lower Unplanned Outages
    • Are unplanned outages or events declining over time?

Wrapping Up

The goal of this article was to share some concepts about where I see the value of IT strategy in enabling a business at an overall level.  I didn’t delve into what the makeup of the underlying technology landscape is or should be (things I discuss in articles like The Intelligent Enterprise and Perspective on Impact Driven Analytics), because the point is to think about how to create momentum at an overall level in areas that matter… innovation, speed, value/cost, productivity, and performance/reliability.

Feedback is certainly welcome… I hope this was worth the time to read it.

-CJG 12/05/2023

The Value and Risk of RACI

When looking towards the Delivering at Speed dimension of Excellence by Design, it’s worthwhile to understand roles and responsibilities and, more importantly, the criticality of effective communication and collaboration in delivery.  To that end, I wanted to provide a quick commentary on the value and risk of RACI as an enabler in the process.

Many who have worked with me know that I’m not a fan of the RACI tool.  In this article, I’ll cover what I consider good and not so good about it.  Hopefully the concepts will be helpful.

At the end of the day, what makes teams effective is a collective investment in success.  That takes courage and a willingness to do whatever it takes to deliver, particularly if a project is complex or high risk.  Where individuals and teams don’t lean into that discomfort, things can easily become imbalanced, inefficient, and ineffective… and the opportunity for excellence is lost.  The ultimate reality is that technology delivery is messy and complex; it involves dealing with adversity and is not for the faint of heart, which is why courageous leadership is the first and most critical dimension of driving excellence in an organization.

A Quick Refresher

For those who may be unfamiliar, the RACI tool is used to help clarify roles and responsibilities across a set of constituents against a defined set of activities, deliverables, or whatever is relevant to the conversation.

The process is generally to have a facilitator populate a grid in two dimensions, with stakeholders or teams as a set of rows and the activities or deliverables across an entire project lifecycle (as an example) as the columns.  Once the teams and work are clarified, the team then typically goes a column at a time, noting which teams have which responsibilities using the RACI notation to indicate who is:

  • R – Responsible (DOER – primarily responsible for performing the activity)
  • A – Accountable (LEADER – ultimately accountable for the execution)
  • C – Consulted (ADVISOR – asked to provide input, in a supporting role)
  • I – Informed (LISTENER – notified of the status or outcome, but not involved)

A conceptual example of a completed RACI chart (with hypothetical teams and activities) could look something like this:
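
```
                     Define        Build      Test       Deploy to
Team                 Requirements  Solution   Solution   Production
-----------------    ------------  --------   --------   ----------
Sponsor                   A            I          I           I
Project Manager           R            A          A           A
Development Team          C            R          C           C
QA Team                   C            C          R           C
Operations                I            I          I           R
```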

Generally, there should only be one “Accountable” party per activity, though there can be more than one individual or team “Responsible” for performing the work.  In many cases, “R” and “A” go together, though there can be situations where someone plays a general contractor-type role and is Accountable, while someone else plays a subcontracting-type role and is Responsible for performing the work.  At one of my previous employers, we occasionally collapsed the “R” and “A” categories into a single “Owner” (O) role, which indicated the individual or team who was both responsible and accountable and simplified the facilitation of the exercise.

 

What’s Good about RACI

In my experience, the value of a RACI discussion is in the conversation, not the tool. 

The conversation is helpful in two primary respects:

  • Clarifying the scope and breadth of activities/deliverables/responsibilities that are associated with whatever the cross-functional team is trying to accomplish
  • Having an understanding of the anticipated interactions of that cross-functional team against those activities

On the latter point, the exercise can be particularly helpful for a newly formed team or on a new type of effort where the combination of activities is emerging and the interactions across the team against those activities aren’t clearly understood.

The discussion itself gives the team a chance to engage, interact, experience the various communication and leadership styles and, in the process, talk about the work they need to perform.

 

Where Things Go Awry

…So what’s the problem?

Well, the problem is sometimes in the mindset of the participants as they enter the discussion and how the tool is ultimately used in practice.

Things to watch for in a RACI discussion:

  • Asserting Control / Promoting Exclusion
    • There are times when participants use the tool and process as a way to establish their authority to make decisions (as the “Accountable” party) in a way that excludes others
    • In these cases, the RACI tool can become a hammer that enables dysfunction and empowers poor leadership
  • Showing a Lack of Accountability
    • There are times when the tone of the discussion shifts towards an “us” and “them” conversation, and the concept of “team” becomes subordinated to who is accountable if something goes wrong
    • In this situation, the tool becomes a hammer to assign blame and undermine partnership
  • Encouraging a Lack of Collaboration
    • Finally, the stronger the contrast between “R/A” and “C” comes across, the greater the risk of an underlying level of dysfunction that goes beyond activities and deliverables
    • While the tool and process are meant to help foster healthy discussion on primary accountability and roles, an extreme version of its use can feel like there is a lot of “throwing things over the wall”… and that is normally something you can hear in the discussion itself

Summing this up: while RACI can be a useful tool, it can also be a mechanism that entrenches dysfunction in an organization, enables poor leadership, assigns blame, and does more harm than good.

 

Breaking Down the Model

In thinking about the above, the question arises: Ok, so what do you do about it?

At a previous employer where we conducted a lot of client workshops, we would start with a predefined set of ground rules and allow the clients to add to the list as they saw fit.  Depending on the group assembled, there were times when that flexibility would actually go astray and the rules became a long laundry list of “what not to dos”.

If the discussion started to feel unhealthy, we would suggest that we reset the list back to two things:

  • Do what makes sense
  • Do the right thing

In practice, almost any situation that would arise in a workshop setting could be addressed with those two principles, and they are simple and broad enough to cover what you need to facilitate a session on just about any topic.

Going back to RACI, when the discussions go astray, the same type of principles may be helpful to set the tone for collaboration as part of the effort.

 

Summing It Up

Stepping back from the tools and process, the critical point to remember is the importance of communication and collaboration in a cross-functional team.

In my experience, when people are effective collaborators and the underlying relationships are sound, there isn’t a need for RACI discussions.  People work past boundaries, sometimes swap responsibilities where the capabilities of individuals are roughly equivalent, and the team is focused less on “who owns what” and more on doing what they need to do to meet the conditions of success.  There is a mindset of mutual support and partnership… and the efficiency of the execution will be much higher by extension.

Most of the time, when I hear someone request or suggest a RACI discussion, I assume there is an underlying issue or source of dysfunction.  It doesn’t mean the conversations can’t be useful in helping to surface and address those concerns and challenges, but it is important to understand they are not a cure-all if the outcome is just a snapshot of something that wasn’t working effectively in the first place.

Hopefully the concepts were helpful.  As always, feedback is welcome and appreciated.

-CJG 06/06/2022

Excellence By Design

Background

As I began this journey and subsequently started to assemble topics about which to write, I noticed that there was both an overwhelming set of ideas coming (a good problem to have) and a very unclear relationship among the concepts running rapidly through my mind (not a good thing).

Upon further reflection, it occurred to me that the ideas all centered around the various dimensions of leading a technology organization at different levels of specificity.  To that end, I thought I should set the stage a bit, in the interest of making things more cohesive in what I may write from here.

 

On the Pursuit of Excellence

At an overall level, what better place to start than a simple premise: Excellence is a choice.

Shooting for excellence is a commitment that requires a lot on a practical level, starting with courageous leadership, because excellence is a perpetually moving target: it requires adaptability, tenacity, and a willingness to accept change as a way of life.  Excellence isn’t accidental; it is a matter of organizational will and the passion to pursue aspirations beyond what, at times, may feel “realistic” or “practical”.  It requires a belief in what is possible and is defined along multiple dimensions, which we’ll explore briefly here, namely:

  • Relentless Innovation
  • Operating with Agility
  • Framework-Driven Design
  • Delivering at Speed

Relentless Innovation

Starting with vision, some questions to consider in the context of an overall strategy:

  • Is it clear and understood across the organization, along with its intended outcome (e.g., what success looks like)?
  • Is it one that connects to individuals in the organization, their roles and ongoing contributions, or are those disconnected concepts (i.e., is it something that individuals take to heart)?
  • Can it evolve as circumstances change while maintaining a degree of fundamental integrity (e.g., will it stand the test of time or need to be continually redefined)?
  • Is it actionable? Can tangible steps be taken to drive progress towards its ultimate goals?
  • Is it “deliberate”/intended/proactive or was it defined in a reactive context (e.g., in response to a competitor’s actions)?
  • Are day-to-day decisions made with the strategy in mind?

Overall, the point is to have a thoughtful, proactive strategy that is actionable, connected to ongoing decisions, and embraced by the broader organization.

Where this becomes more interesting is in how we think of strategy in relation to change, which is where the next concept comes into play.  Relentless innovation is the notion that anything we are doing today may be irrelevant tomorrow, and therefore we should continuously improve and reinvent our capabilities into ones that create the most long-term value.  This is much easier said than done, because it requires a lot of organizational humility and a willingness to tear down existing structures and rebuild new ones in their place.  That forces a degree of risk tolerance, because there is safety in the established practices and solutions of today, especially if they’ve created value.  On the other hand, success can be very detrimental insofar as complacency can become part of the organizational mindset and change slows until the environment becomes essentially an iteration of the present.

 

Operating with Agility

Looking at IT Operations, a number of questions come to mind that may be the subject of future articles:

  • Is there a mindset of being cost-efficient (driving the highest value/cost ratio)?
  • Is there a culture of continuous improvement and innovation in place?
  • Is there a strategy for incorporating and optimizing the relationship of project and product teams (to the extent that a full product orientation isn’t feasible)?
  • Is there a sourcing strategy in place that is deliberate, governed, and optimized (whether insourced, outsourced, or some combination thereof)?
  • Are portfolio management processes effective and aligned to business strategy?
  • Is there a highly transparent, but extremely lightweight operating infrastructure in place to facilitate engagement and value creation?
  • To what degree is talent rotation and development part of the culture? Are people stuck in the same organization or silo for long periods of time, or are high potential leaders moved between teams to facilitate a higher degree of knowledge sharing, development, and improvement?

Having worked in IT Ops, the largest issue I’ve seen across a number of companies is an overly significant focus on process and infrastructure compared with transparency and enablement.  This is a tricky balance to strike, but arguably I’d much rather have a less “mature” operating environment (IT for IT) that produces directionally correct information and drives engagement than a heavy, cumbersome process that becomes a distraction from producing business outcomes.  A simple litmus test for whether the latter type of environment is in place is whether, in discussion, teams talk about the process and tools versus the outcomes, decisions, and impact.

 

Framework-Driven Design

Shifting focus to technology, I believe the opportunity is to think differently about the overall solution architecture of future ecosystems.  Much has been written and discussed relative to modern or cloud native applications, data-centric design, DevSecOps, domain-driven design, and so on.

What fundamentally bothers me about solution design approaches is that, when focusing on one dimension (e.g., data centricity), other dimensions of the more holistic view of modern application design are left out, and it then becomes a challenge for delivery teams to integrate one or more of these concepts in practice without a way to synthesize them into one cohesive approach.  This is where framework-driven design can be an interesting approach to consider.

In my definition, framework-driven design is focused on architecting a connected ecosystem and operating environment intended to promote resiliency, interoperability, and application-agnostic integration such that individual solution components can be upgraded or replaced over time at a rapid pace without disrupting the capability of the ecosystem as a whole.

I will explore this topic further in a future article, but the base premise is to design an overall solution that performs complex tasks given multiple components integrated in standardized ways, leveraging modern, cloud native technologies, with integrated data that feeds embedded analytics capabilities as part of the operation of the ecosystem.

The framework itself, therefore, becomes a platform and the individual components are treated as replaceable parts that enable a best-of-breed mentality as new capabilities emerge that become advantageous to integrate with the framework over time.
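
As a small illustration of the replaceable-parts idea, the framework defines stable contracts and individual components plug into them.  The contract and component names below are hypothetical, purely to show the shape:

```python
from typing import Protocol

class PaymentProvider(Protocol):
    """A framework-level contract; the ecosystem integrates against this,
    never against a specific vendor or product."""
    def charge(self, account_id: str, amount_cents: int) -> str: ...

class LegacyGateway:
    def charge(self, account_id: str, amount_cents: int) -> str:
        return f"legacy-txn:{account_id}:{amount_cents}"

class ModernGateway:
    def charge(self, account_id: str, amount_cents: int) -> str:
        return f"modern-txn:{account_id}:{amount_cents}"

def checkout(provider: PaymentProvider, account_id: str, amount_cents: int) -> str:
    # The rest of the ecosystem depends only on the contract, so either
    # component can be swapped in without disrupting the whole.
    return provider.charge(account_id, amount_cents)

print(checkout(LegacyGateway(), "acct-42", 1999))
print(checkout(ModernGateway(), "acct-42", 1999))  # best-of-breed swap, no ripple
```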

 

Delivering at Speed

From a delivery standpoint, as tempting as it is to write about iterative development (or Agile in particular) as a cure all, the reality is that more organizations suffer from a lack of discipline than a lack of methodology. 

The unfortunate myth that needs to be explored and unwound is that executing with discipline means value will be delayed when, in fact, the exact opposite is true.  It is a generalization, but when a build team moves faster by abandoning process or rigor, the immediate impact is usually a level of technical debt that creates drag, either in the initial delivery or in subsequent efforts.

Quality doesn’t happen by accident.  It is something that needs to be planned and built into a work product from the kickoff of a delivery effort, regardless of the methodology or operating model employed.

I will likely write more on this topic given the number of opportunities that exist, but it’s sufficient to say that you can’t achieve excellence when you don’t execute as flawlessly as possible… and discipline is needed to accomplish that.

 

Wrapping Up

Overall, the goal was to provide a quick summary of the various dimensions that I believe are important to consider in leading an organization.  No doubt, there may be questions or omissions (intentional or unintended), as this was a first pass at how I think about it.

What about people and culture?  Well… that’s part of operating effectively… as an example.

Hopefully this was a good starting point and provided some food for thought.  Feedback, questions, and reactions are always welcome.

Looking forward to continuing this journey.

-CJG 10/28/2021

On Health and Transparency

Having spent a number of years in project and program management (as well as IT Operations), one of my favorite subjects is transparency.

For the purposes of this article, I’m focusing on “traditional” project and program delivery (irrespective of methodology), not product-based operating environments, though many concepts would apply to both.

 

Context

An inevitable truth of managing programs (especially large ones) is: Things will change.  I think of these changes as “points of inflection”, namely times when some level of pivot is needed either within a single project or across multiple work efforts in a coordinated fashion to get things realigned, stable, and back to operating in a healthy way… typically with a revised delivery schedule.

What makes programs successful at these points of inflection is three things:

  1. Having good transparency – Knowing where you are at the present time in relation to your ultimate goal, so that you can evaluate the remedy (or remedies) needed to adjust course
  2. Making the right “turn” – Adjusting the scope, approach, plan, work efforts, risk profile, etc. so that you address the issues identified in a way that increases operating stability, plan integrity, and quality of the ultimate solution and/or deliverables
  3. Turning quickly – Making the appropriate adjustments as quickly as possible after the issues are identified to rectify the situation.  This can’t be emphasized enough, given that a significant amount of disruption, cost, and other negative impacts tend to come as a result of organizational inaction related to materialized risks.  As has been said in consulting many times, “bad news doesn’t get better with time”… and the sooner critical issues or risks are addressed, the better.

So looking at the above, the critical dependency is transparency because, without an understanding of where you are, there’s no way to evaluate whether the changes you make will ultimately help or hurt your delivery effort.

 

The Challenge

This is where the problem starts: Wanting transparency is different from needing lots of metrics, and the latter is where a significant percentage of projects and IT operations efforts waste time, energy, and cost, ultimately undermining delivery and/or operational excellence.  The goal of transparency is to enable governance and risk management.  Reporting in and of itself is administrative in nature and adds marginal value (unless it is of a regulatory nature, of course).

What tends to happen is that, rather than approach transparency in a top-down fashion, with a focus on enabling engagement and action, leaders take a bottom-up approach, assuming that the more data they have, the more they can ultimately understand and control.  This is a critical mistake that happens on projects all the time… reporting so much information that the fundamentals are lost amidst the noise.

One root cause of the above situation is that a good number of leaders see status as organizational comfort food, as if excessive documentation will provide “proof of ongoing work” for customers or create a level of air cover in the event that things ultimately go wrong.  On the latter point, having led high-risk, high-complexity delivery efforts, particularly in consulting: the minute anything goes wrong, if you’re a project or program manager, you can pretty much assume you’ll take the blame regardless of whether the issue or risk has been documented on a status report.  That’s the job, and it’s ok… because no one said these jobs are easy.

This is what makes project and program management work either difficult or very rewarding, depending on how you choose to see it.  Your best work tends to happen when nothing goes wrong, in which case people may well ask what you do all day.  In the converse scenario, when things are not going well, you may find yourself in a situation where people ask what you’ve been doing, and you won’t want the answer to involve having spent hours writing or updating reports.  Our goal in delivery is producing value (i.e., business outcomes), not status.

 

Some Examples

Three different situations (each from a different organization) come to mind where “reporting” got in the way of actual transparency:

First, I was asked to cover a delivery project from an oversight standpoint while the director responsible for the work was out on vacation.  The project manager, a trained PMP, had a well-organized and documented project, with an abundance of metrics.  A source of particular pride was the earned value (EVM) calculation, which indicated that the team was 90+% complete on the project.  Upon reviewing the artifacts, we realized that the effort related to producing deliverables was not included in the plan, and therefore the EVM calculation didn’t include the full project scope; the project was actually massively BEHIND where it was supposed to be (something in the 70+% range).  In practice, the project manager had spent so much time managing metrics that they forgot to make sure the scope of the project was represented in the plan, effectively compromising what they had been reporting for weeks.  The good news is that, having recognized the gap, we made the necessary corrections, pulled in some additional help, and got things back on track.  Overall, however, the lesson learned was that metrics are only as good as the understanding they provide of the underlying effort itself.
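
To make the arithmetic concrete with hypothetical numbers (not the actual project’s figures): when the effort to produce deliverables never makes it into the plan, the budget at completion is understated, so percent complete is overstated.

```python
# Hypothetical figures for illustration only
effort_in_plan = 900   # hours of work captured in the project plan
earned_value = 810     # hours of planned work actually completed

print(f"{earned_value / effort_in_plan:.0%}")  # 90% -- what was being reported

# ...but the effort to produce the deliverables was never planned:
missing_scope = 250    # hours of deliverable work omitted from the plan
true_budget_at_completion = effort_in_plan + missing_scope

print(f"{earned_value / true_budget_at_completion:.0%}")  # 70% -- the real picture
```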

In another situation, I inherited a client engagement where one of the divisional CIOs had the team producing a forty-page status report covering all delivery efforts in the portfolio on a weekly basis.  It was well understood that the client wasn’t reading the report, but rather requested it to force the team to evaluate their project health on a continuing basis, the implication being that writing status helps project managers focus on their projects.  In practice, the opposite is what happens: the more time project managers spend on administrivia, the less time they tend to spend on activities that add value, such as communicating with customers, engaging with their delivery team(s), identifying and managing risks, and reviewing the plan for gaps and issues.  It took some convincing, but this was an activity we ultimately eliminated, which not only reduced administrative effort, but also allowed us to cut a full-time position that had been staffed solely to report on that portfolio of work.

Finally, I was asked to take a short-term assignment to help a divisional CIO organize his PMO, particularly the reporting being done across a relatively large and diverse portfolio of work.  Similar to the previous situation, he showed me a stack of printed status reports roughly four inches thick that he was receiving weekly from his delivery leaders, which we ultimately replaced with a four-page dashboard organized across financial health, strategic programs, ongoing discretionary projects, and production support.  Rather than produce information on a project-by-project basis, we requested standard information from each delivery team at a summarized level, which helped highlight areas where intervention was actually required and eliminated a significant amount of the administration related to written status.  In practice, the minute the yellow or red light goes on, someone likely needs to have a conversation anyway, so writing a lot of documentation doesn’t tend to help anyone.  The end result was less paperwork and a lot of relieved project managers.

 

Focusing on What Matters

Coming back to transparency, some fundamental concepts that I believe are important:

  • If you can’t understand the overall situation, the details don’t matter
  • Qualitative information can be just as good as quantitative data, given that the quality of detailed project metrics is often suspect and imperfect anyway
  • Any metric that is reported should be actionable, otherwise it is likely noise and a distraction
  • Once things get into the ‘yellow’ or ‘red’, there is a discussion coming, so there is little value in writing a lot of language in a status report that will ultimately be discussed anyway
  • It’s completely fine (and often helpful) to track detailed items within a project, but report on summary characteristics from a governance standpoint

In line with the above, these are the seven primary ‘indicators’ (all R/Y/G) that I consider important to track across projects in a program or portfolio of work, along with their conceptual definitions:

  • Overall Health – Indication of the Delivery Quality overall, given the following six dimensions
  • Scope – Requirements are clear and understood
  • Budget – Financially in line with expectations
  • Staffing – Team is skilled and staffed to deliver the work
  • Schedule – Work is being executed in line with expectations
  • Quality – Work product is in line with or exceeding expectations
  • Risk – Concern areas are understood and being mitigated effectively

Where the indicators are defined as:

  • Green – On Track
  • Yellow – Needs Attention (50%+ chance of an issue)
  • Red – Requires Action (Issue exists, discussion or intervention needed)

This isn’t to say that things like next major milestone, % complete, % test cases planned/executed/passed, etc. are not useful to report, but they generally fall at a lower level of criticality once the above indicators are addressed.  My general preference is to have details only where additional insight is needed (for example, Schedule is “yellow”, in which case progress metrics specific to the current activities may provide useful information).
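
As a sketch of how lightweight this can be in practice, the seven indicators reduce to a small record per project, with the dashboard surfacing only what needs a conversation.  The field names and the “worst-of” rollup for Overall Health are my assumptions for illustration, not a prescribed model:

```python
from dataclasses import dataclass
from enum import Enum

class Rag(Enum):
    GREEN = "On Track"
    YELLOW = "Needs Attention"
    RED = "Requires Action"

@dataclass
class ProjectStatus:
    name: str
    scope: Rag = Rag.GREEN
    budget: Rag = Rag.GREEN
    staffing: Rag = Rag.GREEN
    schedule: Rag = Rag.GREEN
    quality: Rag = Rag.GREEN
    risk: Rag = Rag.GREEN

    @property
    def overall(self) -> Rag:
        # Assumption: Overall Health rolls up as the worst of the six dimensions
        dims = [self.scope, self.budget, self.staffing,
                self.schedule, self.quality, self.risk]
        if Rag.RED in dims:
            return Rag.RED
        if Rag.YELLOW in dims:
            return Rag.YELLOW
        return Rag.GREEN

def needs_attention(portfolio: list[ProjectStatus]) -> list[ProjectStatus]:
    """Surface only the projects where a conversation is coming anyway."""
    return [p for p in portfolio if p.overall is not Rag.GREEN]

portfolio = [
    ProjectStatus("Billing Replatform", schedule=Rag.YELLOW),
    ProjectStatus("Warehouse Migration"),
]
for p in needs_attention(portfolio):
    print(p.name, "->", p.overall.value)  # Billing Replatform -> Needs Attention
```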

 

Conclusion

While a lot has been written about these topics, the goal of this article was simply to share a few concepts about transparency, the role it plays in effective governance and risk management, and the often questionable value of reporting in itself.

There are many best practices related to transparency and reporting, but the main takeaways:

  • If you don’t have effective transparency, you are highly exposed to the experience of the people leading an effort, and your ability to govern and manage risk and change will be limited
  • If you don’t know how you will use the information (down to the individual metrics) being reported on a delivery effort (or operational dashboard), don’t collect it. It’s likely a waste of time
  • The more you measure and report, the less you’ll likely notice the critical things that matter
  • The more time you force delivery leaders to spend on reporting, the less likely they are to be focused on their actual delivery responsibilities
  • The goal of transparency is to drive engagement and action (and value), not administration

I hope the information was useful… as always, feedback and comments welcome.

-CJG 08/21/2021