What It Is: Application rationalization is the process of reducing redundancy in an application portfolio to lower complexity, reduce cost of ownership, and improve speed-to-market.
Why It Matters: Organizations typically spend anywhere from 50% to 80% of their IT budget maintaining and supporting the systems already in place. That limits investment in innovation and competitive advantage.
Key Concepts
Understand that rationalization is more about change management than technology
Ensure there are healthy relationships in place and strong leadership support for the work
Focus on the critical areas of the portfolio that drive cost. Don’t boil the ocean
Don’t worry about creating the perfect infrastructure day one. Clean that up along the way
Start with how your business operates and simplify and standardize processes first
Align your future blueprint as cleanly to your desired operating footprint as possible
Consider your Artificial Intelligence (AI), cloud, and security strategies in the future vision
Simplification can come through reducing both unique applications and instances of applications
Address how systems will be supported and enhanced moving forward in your design
Explicitly include milestones for decommissioning in your roadmap. Don’t let that go undone
Expect the work to continually evolve and adapt. Plan for change and adjust responsively
Include rationalization as part of your ongoing portfolio strategy so it’s not a one-time event
Approach
Align – Obtain organizational support critical to defining vision, scope, and facilitating change
Understand – Gather an understanding of the current state and alignment to operations
Evaluate – Leverage something like the Gartner TIME model to evaluate portfolio quality and fit
Strategize – Develop a future state blueprint, CBA, and proposed changes to the environment
Socialize – Obtain feedback, iterate, clarify the vision, and finalize the initial roadmap
Mobilize – Launch first wave of delivery, realign ongoing work as required
Execute – Deliver on 30-, 60-, and 90-day goals, governing and adjusting the approach as you go
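The Evaluate step above references Gartner’s TIME model (Tolerate, Invest, Migrate, Eliminate). As a purely illustrative sketch, and not a prescribed methodology, a first-pass triage might score each application on business fit and technical quality and map it to a quadrant; the scores and threshold here are assumptions:

```python
# Hypothetical first-pass portfolio triage using the Gartner TIME model
# (Tolerate / Invest / Migrate / Eliminate). Scores and the threshold
# are illustrative assumptions, not part of any official methodology.
from dataclasses import dataclass


@dataclass
class App:
    name: str
    business_fit: int   # 1-10: how well it serves current/future operations
    tech_quality: int   # 1-10: architecture, supportability, security posture


def time_quadrant(app: App, threshold: int = 5) -> str:
    """Map an application to a TIME quadrant based on two simple scores."""
    if app.business_fit > threshold and app.tech_quality > threshold:
        return "Invest"      # high value on a healthy platform
    if app.business_fit > threshold:
        return "Migrate"     # valuable function on a weak platform
    if app.tech_quality > threshold:
        return "Tolerate"    # sound technology, limited business value
    return "Eliminate"       # low value, poor quality: decommission candidate


portfolio = [
    App("Legacy Order Entry", business_fit=8, tech_quality=3),
    App("Regional Reporting Tool", business_fit=2, tech_quality=2),
]
for app in portfolio:
    print(f"{app.name}: {time_quadrant(app)}")
```

In practice the scoring would come from stakeholder interviews and inventory data rather than hard-coded numbers; the value of even a rough pass like this is forcing an explicit, comparable rating across the portfolio.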
It ought to be easier (and cheaper) to run a business than this…
Complexity and higher-than-desirable operating costs are prevalent in most medium- to large-scale organizations. With that generally comes interest in reducing and simplifying the technology footprint: to lower ongoing expenses, mitigate risk and limit security exposure, and free up capital, either to reinvest in more differentiated, value-added activity or to contribute directly to the bottom line.
The challenge is finding an approach to simplification that is analytically sound while providing insight at speed, so you can get to the work rather than spending more time “analyzing” than any step of the process requires.
In starting to outline the content for this article, aside from identifying the steps in a rationalization process and working through a practical example to illustrate some scenarios that can occur, I also started noting some other, more intangible aspects to the work that have come up in my experience. When that list reached ten different dimensions, I realized that I needed to split what was intended to be a single article on this topic into two parts: one that addresses the process aspects of simplification and one that addresses the more intangible and organizational/change management-oriented dimensions. This piece is focused on the intangibles, because the environment in which you operate is critical to setting the stage for the work and ultimate results you achieve.
The Remainder of this Article…
The dimensions that came to mind fell into three broader categories, under which they are organized below:
Leading and Managing Change
Guiding the Process and Setting Goals
Planning and Governance
For each dimension, I’ll provide some perspective on why it matters and some ideas to consider for addressing it in the context of an overall simplification effort.
Leading and Managing Change
At its core, simplification is a change management and transformational activity and needs to be approached as such. It is as much about managing the intangibles and maintaining healthy relationships as it is about the process you follow or the opportunities you surface. Certainly, the structural aspects and the methodology matter, but without attention to the items below, you will likely face some very rough sledding in execution, suboptimize your outcomes, or fail altogether. Said differently: the steps you follow are only part of the work; improving your operating environment is critically important.
Leadership and Culture Matter
Like anything else that corresponds to establishing excellence in technology, courageous leadership and an enabling culture are fundamental to a simplification activity. The entire premise of this work rests on change, and wherever change is required, there will be friction and resistance, potentially significant resistance at that.
Some things to consider:
Putting the purpose and objective of the change front and center and reinforcing it often (likely reducing operating expense in the interest of improving profitability or freeing up capital for discretionary spending)
Working with a win-win mindset, looking for mutual advantage, building partnerships, listening with empathy, and seeking to enroll as many people in the cause as possible over time
Being laser-focused on impact, not solely on “delivery”, as the outcomes of the effort matter
Remaining resilient, humble (to the extent that there will be learnings along the way), and adaptable, working with key stakeholders to find the right balance between speed and value
It’s Not About the Process, It’s About Your Relationships
Much like portfolio management, it is easy to become overly focused on the process and data with simplification work and lose sight of the criticality of maintaining a healthy business/technology partnership. If IT has historically operated in an order taker mode, suggesting potentially significant changes to core business applications that involve training large numbers of end users (and the associated productivity losses and operating disruptions that come with that) may go nowhere, regardless of how analytically sound your process is.
Some things to consider:
Know and engage your customer. Different teams have different needs, strategies, priorities, risk tolerance, and so on
You can gather data and analyze your environment (to a degree) independent of your business partners, but they need to be equally invested in the vision and plan for it to be successful
Establish a cadence with key stakeholders, individually and collectively, aligned to the pace of the work, minimally to maintain a healthy, transparent, and open dialogue on objectives, opportunities, risks, and required interventions and support
Be a Historian as Much as You are an Auditor
Back to the point above about improving the operating environment being as important as your process/methodology: you need to understand how you got where you are as part of the exercise, or you may end up right back there as you try to make things “better”. Complexity could be the result of a sequence of acquisitions, decentralized decisions made without effective oversight or governance, functional or capability gaps in enterprise solutions being addressed at a “local” level, underlying culture or delivery issues, etc. Knowing the root causes matters.
As an example, I once saw a situation where two teams implemented different versions of the same application (in different configurations) purely because the technology leaders didn’t want to work with each other. The same application could’ve supported both organizations, but the decisions were made without enterprise-level governance, the operating complexity and TCO increased, and the subsequent cost to consolidate into a single instance was deemed “lower priority” than continuing work. While this is a very specific example, the point is that understanding how complexity is created can be very important in pivoting to a more streamlined environment.
Some things to consider:
As part of the inventory activity, look beyond pure data collection; take the opportunity to understand how the various application portfolios came about over time, the decisions that led to the complexity that exists, the pain points, and what is viewed as working well (and why)
Use the insights obtained to establish a set of criteria to consider in forming the vision and roadmap for the future, so you have a sense of whether the changes you’re making will be sustainable. These considerations can also help identify implementation risks that could reintroduce the kind of complexity in place today
What Defines “Success”
Normally, a simplification strategy is based on a snapshot of a point in time, with an associated reduction in overall cost (or shift in overall spend distribution) and/or assets (applications, data solutions, etc.). This is generally a good way to establish the case for change and the desired outcome of the activity itself, but it doesn’t necessarily cover what is “different” about the future state beyond a couple of core metrics. I would argue that it is also important to consider the previous point: how the organization developed a complex footprint to begin with.
As an example, if complexity was caused by a rapid series of acquisitions, even if I do a good job of reducing or simplifying the footprint in place, if I continue to acquire new assets, I will end up right back where I was, with a higher operating cost than I’d like. In this case, part of your objective could be to have a more effective process for integrating acquisitions.
Some things to consider:
Beyond the financial and operating targets, identify any necessary process or organizational changes needed to facilitate sustainability of the environment overall
This could involve something as simple as reviewing enterprise-level governance processes, or more structural changes in how the underlying technology footprint is managed
Guiding the Process and Setting Goals
A Small Amount of Good Data is Considerably Better than a Lot of Bad
As with any business situation, it’s tempting to assume that having more data is automatically a good thing. In the case of maintaining an asset inventory, the larger and more diverse an organization is, the more difficult it is to maintain the data with any accuracy. To that end, I’m a very strong believer in maintaining as little information as possible, doing deep dives into detail only as required to support design-level work.
As an example, we could start the process by identifying functional redundancies (at a category/component level) and spend allocations within and across portfolios as a means to surface the overall savings opportunity and target areas for further analysis. That requires a critical, minimum set of data at a reasonable level of administrative overhead. Once specific target areas are identified and prioritized, further data gathering can be done as a separate process, in the interest of comparing different solutions, performing gap analyses, and identifying candidate future-state solutions. This approach prioritizes going broad (to define opportunities) over going deep (to design the solution), and I would argue it is a much more effective and efficient way to go about simplification, especially if the underlying footprint has any volatility that would make more detailed information outdated relatively quickly.
Some things to consider:
Prioritize a critical, minimum set of data (primary functions served by an application, associated TCO, level of criticality, businesses/operating units supported, etc.) to understand spend allocation in relation to the operating and technology footprint
Deep dive into more particulars (functional differences across similar systems within a given category) as part of a specific design activity downstream of opportunity identification
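To make the “critical, minimum set of data” idea concrete, the broad first pass can be as simple as aggregating TCO and counting applications per functional category to surface redundancy hot spots. The sketch below is illustrative only; the inventory fields (category, TCO) are assumptions, not a standard schema:

```python
# Illustrative broad-pass analysis: aggregate spend and application counts
# by functional category to surface redundancy hot spots. The inventory
# records and their fields ("category", "tco") are assumed for the example.
from collections import defaultdict

inventory = [
    {"app": "ERP-NA",    "category": "ERP",  "tco": 4_200_000},
    {"app": "ERP-EU",    "category": "ERP",  "tco": 3_100_000},
    {"app": "TaxTool",   "category": "Tax",  "tco": 250_000},
    {"app": "TaxCalc",   "category": "Tax",  "tco": 180_000},
    {"app": "EHS-Track", "category": "EH&S", "tco": 90_000},
]


def redundancy_hotspots(apps):
    """Return (category, stats) pairs sorted by total TCO, descending.

    A category served by more than one application is a candidate
    for deeper design-level analysis downstream.
    """
    summary = defaultdict(lambda: {"count": 0, "tco": 0})
    for a in apps:
        summary[a["category"]]["count"] += 1
        summary[a["category"]]["tco"] += a["tco"]
    return sorted(summary.items(), key=lambda kv: kv[1]["tco"], reverse=True)


for category, stats in redundancy_hotspots(inventory):
    flag = "  <- redundant" if stats["count"] > 1 else ""
    print(f"{category}: {stats['count']} apps, ${stats['tco']:,}{flag}")
```

The point of keeping the schema this small is that it stays maintainable at scale; the deeper attributes (functional gaps, integration maps, etc.) are gathered only for the categories this pass flags.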
Be Greedy, But Realistic
The simplification process is generally going to be iterative in nature, insofar as there may be a conceptual target for complexity and spend reduction/reallocation at the outset, some analysis is performed, the data provides insight on what is possible, the targets are adjusted, further analysis or implementation is performed, the picture is further refined, and so on.
In general, my experience is that there will always be constraints on what you can practically pursue, so it is a good idea to overshoot your targets. By this I mean we should strive to identify more than our original savings goals: if we limit the opportunities we identify to a preconceived goal or target, we may either suboptimize the business outcome if things go well, or fall short of expectations if we can pursue only a subset of what was originally identified due to business, technology, or implementation-related issues.
Some things to consider:
Review opportunities by asking what would be different if you could pursue only smaller, incremental efforts; had a target twice what you’ve identified; or could start from scratch and completely redefine your footprint with an “optimal case” in mind… and consider what, if anything, would change about your scope and approach
Planning and Governance
Approach Matters
Part of the challenge with simplification is knowing where to begin. Do you cover all of the footprint, the fringe (lower priority assets), the higher cost/core systems? The larger an organization is, the more important it is to target the right opportunities quickly in your approach and not try to boil the ocean. That generally doesn’t work.
I would argue that the primary question for targeting a starting point is where you are overall from a business standpoint. The first iteration of any new process tends to generate learnings and improvements, so there will be more disruption than expected the first time you execute the process end-to-end. To that point, if there is a significant amount of business risk in making widespread, foundational changes, it may make sense to start with lower-risk, clean-up-type activities on non-core/supporting applications (e.g., Treasury, Tax, EH&S) rather than core solutions (like an ERP, MES, Underwriting, or Policy Admin system). On the other hand, if simplification is meant to help streamline core processes, enable speed-to-market and competitive advantage, or drive some form of business growth, focusing on core platforms first may be the right approach.
The point is that the approach should not be developed independent of the overall business environment and strategy; they need to align with each other.
Some things to consider:
As part of the application portfolio analysis, understand the business criticality of each application, level of planned changes and enhancements, how those enable upcoming strategic business goals, etc.
Consider how the roadmap will enable business outcomes over time, whether that is ideally a slow build of incremental gains, or bigger bets and high-impact changes that materially affect business value and IT spend
Accuracy is More Important Than Precision
This may seem to contradict what I wrote earlier about having a smaller amount of good data, but the point here is to acknowledge that in a transformation effort there is a directly proportional relationship between the degree of change involved and the level of uncertainty in the eventual outcome. Said differently: the more you change, the less precisely you can predict the result.
This is true because limited data is generally available on the operating impact of changes to people, process, and technology. Consequently, the more you change one or more of those elements, the more limited your ability to predict the exact outcome from a metrics standpoint (beyond an anecdotal/conceptual level) will be. In line with the concepts I shared in the recent “Intelligent Enterprise 2.0” series, I believe that with orchestration and AI we can gather, analyze, and leverage a greater base of this kind of data, but the infrastructure to do so largely doesn’t exist in most organizations I’ve seen today.
Some things to consider:
Be mindful not to “overanalyze” the impact of process changes up front in the simplification effort. The business case will generally be based on the overall reduction in assets/ complexity, changes in TCO, and shifts (or reductions) in staffing levels from the current state
It is very difficult to predict the end state when a large number of applications are transitioned as part of a simplification program, so allow for a degree of contingency in the planning process (in schedule and finances) rather than spending time trying to forecast every detail. Some things that don’t appear critical will reveal themselves to be so only during implementation, some applications you believe you can decommission will remain for a host of reasons, and so on. The best-laid plans on paper rarely prove out exactly in execution, depending on the complexity of the operating environment and culture in place
Expect Resistance and Expect a Mess
Any large program in my experience tends to go through an “optimism” phase: you identify a vision and a fairly significant, transformative goal; the business case and plan look good; they’ve been vetted and stakeholders are aligned; and you have all the normal “launch”-related events that generate enthusiasm and momentum toward the future… and then reality sets in, and the optimism phase ends.
Having written more than once on transformation, the reality is that it is messy and challenging for a multitude of reasons, starting with the patience, adaptability, and tenacity it takes to really facilitate change at a systemic level. The status quo feels safe and comforting because it is known, and upsetting that reality will necessarily lead to friction, resistance, and obstacles throughout the process.
Some things to consider:
Set realistic goals for the program at the outset; acknowledge that it is a journey, that sustainable change takes time, and that the approach will evolve as you deliver and learn; and treat engagement, communication, and commitment as the non-negotiables you need throughout to inform the right decisions at the right time and promote success
Plan with the 30-, 60-, and 90-day goals in mind, but acknowledge that any roadmap beyond the immediate execution window will be informed by delivery and likely evolve over time. I’ve seen quite a lot of time wasted on detailed planning more than one year out where a goal-based plan with conceptual milestones would’ve provided equal value from a planning and CBA standpoint
Govern Efficiently and Adjust Responsively
Given the scale and complexity of simplification efforts, it would be relatively easy to “over-report” on a program of this type and cause adverse impact on the work itself. In line with previous articles that I’ve written on governance and transparency, my point of view is that the focus needs to be on enabling delivery and effective risk management, not administrative overhead.
Some things to consider:
Establish a cadence for governance early on to review ongoing delivery, identify interventions and support needed, learnings that can inform future planning, and adjust goals as needed
In my experience, large programs succeed or fail based on maintaining good transparency into where you are, identifying course corrections when needed, and making those adjustments quickly to minimize the cost of “the turns” when they inevitably happen. Momentum is so critical in transformation efforts that minimizing these impacts is essential to keeping things on track
Wrapping Up
Overall, the separation of the process from the change in addressing simplification was deliberate, because both aspects matter. You can have a thoughtful, well-executed process and accomplish nothing in terms of change; equally, you can be very mindful of the environment and the changes you want to bring about, but the execution model needs to be solid, or you will lose any momentum and goodwill you’ve built in support of your effort.
Ultimately, the critical takeaway is recognizing that you’re engaged in both a change activity and a delivery activity. Most medium- to large-scale environments end up complex for a host of reasons. You can change the footprint, but you need to change the environment as well, or it’s only a matter of time before you find yourself right back where you started, perhaps with a different set of assets, but a lot of the same problems you had in the first place.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
“Something needs to change… we’re not growing, we’re too slow, we’re not competitive, we need to manage costs…”
Change is an inevitable reality in business. Even the most successful company will face adversity, new competitors, market shifts, evolving customer needs, expense pressures, near-term shareholder expectations, etc. While it’s important to focus on adjusting course and remedying issues, the question is whether (and how quickly) you will find yourself in exactly the same, or at least a relatively similar, situation again, and that comes down to leadership and culture.
Culture issues tend to beget other business issues, whether delayed or misaligned decisions, complacency and a lack of innovation, a lack of collaboration and cooperation, risk-averse behaviors, redundant or ineffective solutions, high turnover resulting in lost expertise, and so on. The point is that excellence needs to start “at the top” and work its way throughout an organization, and the mechanism for that proliferation is culture.
The focus of this article will be to explore what it takes to evolve culture in an organization, and to provide ways to think about what can happen when it isn’t positioned or aligned effectively.
It Starts with the Right Intentions
Conformance versus Performance
Before attempting to change anything, the fundamental question to ask is why you want to have a culture in place to begin with. Certainly, over the course of time and multiple organizations (and clients), I’ve seen culture emphasized to varying degrees, from being core to a company’s DNA to being relegated to a poster, web page, or item on everyone’s desk that is seldom noticed or referenced.
In cases where it’s rarely referenced, there is missed opportunity to establish mission and purpose, rallying people around core concepts that can facilitate an effective work environment.
That being said, focusing on culture doesn’t necessarily create a greater good in itself, as I’ve seen environments where culture is used in almost a punitive way, suggesting there are norms to which everyone must adhere and specific language everyone needs to use, or there will be adverse consequences.
That isn’t about establishing a productive work environment; it’s about control and conformance, and that can be toxic when you understand the fundamental issue it represents: employees aren’t trusted enough to do the right thing, be empowered, and be enabled to act, so a mechanism is put in place to drive conformity, enforce “common language”, and isolate those who don’t fit the mold, creating a more homogenous organizational society.
So, what happens to innovation, diversity, and inclusion in these environments? It’s suppressed or destroyed, because the capabilities and gifts of the individual are lost to the push towards a unified, homogenized whole. That is a fairly extreme outcome of such authoritarian environments, but the point is that a strong culture is not, in itself, automatically good if the focus is control and not performance and excellence.
I’ve written multiple articles on culture and values that I believe are important in organizations, so I won’t repeat those messages here, but the goal of establishing culture should be fostering leadership, innovation, growth, collaboration, and optimizing the contribution of everyone in an organization to serve the greater good. If that doesn’t apply in equal measure to every employee, based on their individual capabilities and experience, that’s fine from my perspective, so long as they don’t detract from the performance of others in the process. The point is that culture isn’t about the words on the wall, it’s about the behaviors that you are aspiring to engender within an organization and the degree to which you live into them every day.
Begin with Leadership
Words and Actions
It is fairly obvious to say that culture needs to start “at the top” and work its way outward, but I’ve seen so many issues in this step alone over time that it is worth repeating.
It is not uncommon for leaders to speak in town hall meetings or public settings and proclaim the merits of the company culture, asking others to follow the core values or principles as outlined, to the betterment of themselves and everyone else (customers and others included as appropriate). Now, the question is: what happens when that person returns to their desk and makes their next set of decisions? This is where culture is measured, and employees notice everything over time.
The challenge for leaders who want excellence and organizational performance is to take culture to heart and do what they can to live into it, even in the most difficult circumstances, which is where it tends to be needed the most. I remember hearing a speaker suggest that the litmus test of the strength of your commitment to culture could be expressed in whether you would literally walk away from business rather than compromise your values. That’s a pretty difficult bar to set in my experience, but an interesting way to think about the choices we make and their relative consequence.
Aligning Incentives versus Values
Building on the previous point, there is a difference between behaviors and values. Values are what you believe and prioritize; behaviors are how you act. Behaviors are directly observable; values are only indirectly observed through your words and actions.
Why is this important in the context of culture? Because you can incent people to influence their behavior, but you can’t change someone’s values, no matter how you incent them. To the extent you want to set up a healthy, collaborative culture and there are individual motivations that don’t align with doing the right thing, organizational performance will suffer in some way, and the more senior the individual(s) are in the organization, the more significant the impact will likely be.
This point ultimately comes down to doing the right level of due diligence during the hiring process, but also being willing to make difficult decisions during the performance management process, because sometimes individual performers with unhealthy behaviors cause a more significant impact than is evident without some level of engagement and scrutiny from a leadership standpoint.
Have a Thoughtful Approach
Incubate -> Demonstrate -> Extend
As the diagram above suggests, culture doesn’t change overnight, and being deliberate in the approach to change will have a significant impact on how effective and sustainable it is.
In general, the approach I’d recommend is to start from “center” with leadership: raise awareness, educate on the intent and value of the changes proposed, and incubate there. Broader communication about the proposed shift is likely useful in preparing the next group to be engaged, but the point is to start small, begin “living into” the desired model, evaluate its efficacy, and demonstrate the value it can create, THEN extend to the next (likely adjacent) set of people, repeating the process until the change has fully proliferated through the organization. The length of any given iteration will vary depending on the size of the employee population and the degree of change involved (more substantial = longer windows of time). The point is to be conscious and deliberate in the approach, so adjustments can be made along the way and leaders can understand and internalize the “right” set of behaviors before being expected to advocate for and reinforce them in others.
An Example (Building an Architecture Capability)
To provide a simple example, when trying to establish an architecture capability across an organization, it would need to ultimately span from the central enterprise architecture team down to technical leads on individual implementation teams. It would be impractical to implement the model all at once, so it would be more effective to stage it out, working from the top-down, first defining roles and responsibilities across the entire operating model, but then implementing one “layer” of roles at a time, until it is entirely in place.
Since architects are generally responsible for technical solution quality, but not execution, the deployment of the model would need to follow two coordinated paths: building the architecture capability itself and aligning it with the delivery leadership with which it is meant to collaborate and cooperate (e.g., project and program managers). Trying to establish the role without alignment and support from people leading and participating on delivery teams likely would fail or lead to ineffective implementation, which is another reason why a more thoughtful and deliberate approach to the change is required.
What does this have to do with culture? Well, architecture is fundamentally about solution quality in technology, reuse, managing complexity and cost of ownership, and enabling speed and disciplined innovation. Establishing roles with an accountability for quality will test the appetite within an organization when it comes to working beyond individual project goals and constraints to looking at more strategic objectives for simplification, speed, and reuse. Where courageous leadership and the right culture are not in place, evolving the IT operating model will be considerably more difficult, likely at every stage of the process.
Manage Reality
To this point, I’ve addressed change in a fairly uniform and somewhat idealistic manner, but reality is often quite different, so I wanted to explore a couple situations and how I think about the implications of each.
Non-Uniform Execution
So, what happens when you change culture within your team, but it doesn’t extend to those who work directly with you? It depends on the nature of the change itself, of course, but the farther “out” from center you go, the more difficult it will likely be for your team to capitalize on whatever the benefits of the change were intended to be.
My assumptions here are in relation to medium- to larger-scale organizations, where the effects are magnified and it is impractical to “be everywhere, all the time” to engage in ways that help facilitate the desired change.
In the case that there isn’t broader alignment to whatever cultural adjustments you want to make within your team, depending on the degree of difference to the broader company culture, it may be necessary to clarify how “we operate internally” versus how “we engage with others”. The goal of drawing out that separation would be to try and drive performance improvement within your team, but not waste energy and create friction in your external interactions.
There is a potential risk in having teams with a very different culture than the broader organization if it creates an “us and them” mentality, or a special-treatment situation where that team demonstrates unhealthy organizational behaviors or is held to different standards than others. Ultimately those situations cause larger issues and should be avoided where possible.
Handling Disparate Cultures
Unlike the previous situation, where there is one broader culture and a team operates in a slightly different manner, I’ve also seen groups operate with very different cultures within the same overall organization, which can create substantial disconnects if not addressed effectively. When not addressed, there can be a lot of internal friction, competition, and a lack of effective collaboration, which will hinder performance in one or more ways over time.
One way to manage multiple distinct cultures within a single organization is first to look for some core, universally accepted operating principles that can be applied to everyone, and then to focus on the points of engagement across organizations: clarify roles and responsibilities for each constituent group, and manage those dependencies the same as you would when working with a third-party provider or partner. The overall operating performance opportunity may not be fully realized, but this kind of approach can provide clarity of expectations and reduce friction points to a large degree.
Wrapping Up
The purpose of this article was to come back to a core element that makes organizations successful over time, and that’s culture. To the degree that there are gaps or issues, it is always possible to adapt and evolve, but it takes a thoughtful approach, the right leadership, and time to make sustainable change. In my opinion, it is time worth spending to the degree that performance and excellence are your goals. It will never be “perfect” for many reasons, but thinking about how you establish, reinforce, and evolve culture in a disciplined way can make the difference in remaining agile, competitive, and successful overall.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
This is a question that I’ve heard asked many times, particularly when there is a strategic initiative or transformation effort being kicked off. Normally, the answer is an enthusiastic “Yes”, because most programs start with a lot of optimism (which is a good thing), but not always a full understanding of risk. The question is… How do you know whether you have the necessary capabilities to deliver?
In any type of organization, there is a blend of skills and experience, whether that is across a leadership team or within an individual team itself. Given that reality and the ongoing nature of organizations to evolve, realign, and reorganize, it is not uncommon to leverage some form of evaluation (such as a Kolbe assessment) to understand the natural strengths, communication, or leadership styles of various individuals to help facilitate understanding and improve collaboration.
But what about knowledge and experience? This part I haven’t seen done as often, partly because, if not done well, it can lead to a cumbersome and manually intensive process that doesn’t create value.
The focus of this article is to suggest a means to understand and evaluate the breadth of knowledge and skills across a team. To the extent we can visualize collective capability, it can be a useful tool to inform various things from a management standpoint, which are outlined in the second section below.
Necessary caveats: The example used is not meant to be prescriptive or exhaustive and this activity doesn’t need to be focused on IT alone. The goal in the illustration used here was to provide enough specificity to help the reader visualize the concept at a practical level, but the data is entirely made up and not meant to be taken as a representation of an actual set of people.
On the Approach
Thinking through the Dimensions
The diagram above breaks out 27 dimensions from a knowledge and skills standpoint, ranging from business understanding to operations and execution. The dimensions chosen for the purposes of this exercise don’t particularly matter, but I wanted to select a set that covered many of the aspects of an IT organization as a whole.
From an approach standpoint, the goal would be to identify what is being evaluated, select the right set of dimensions, define them, then determine “what good looks like” in terms of having a baseline for benchmarking (e.g., 10 means X, 8 means Y, 6 means Z, etc.). With the criteria established, one should then explain the activity to the group being evaluated, prepare a simple survey, and gather the data. The activity is meant to be rapid and directionally accurate, not to supplant individual performance evaluations, career development, or succession plans that should exist at a more detailed level. Ideally the dimensions should also align to the competency model for an organization, but the goal of this activity is directional, so that step isn’t critical if it requires too much effort.
Once data has been collected, individual results can be plotted in a spider graph like the one below to provide a perspective on where there are overlaps and gaps across a team.
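The aggregation behind such a view is straightforward. As a minimal sketch (the dimension names, people, and scores here are entirely hypothetical and not drawn from the article’s example data; it assumes a 1–10 survey scale), the collective “envelope” and the gaps can be computed directly from the survey results:

```python
# Hypothetical survey results: person -> {dimension: score on a 1-10 scale}.
scores = {
    "Ana":   {"Architecture": 8, "Security": 4, "AI": 2, "Operations": 7},
    "Ben":   {"Architecture": 5, "Security": 9, "AI": 3, "Operations": 6},
    "Chloe": {"Architecture": 6, "Security": 5, "AI": 1, "Operations": 9},
}

def team_envelope(scores):
    """Collective capability: the best score per dimension across the team
    (the outer outline on a spider graph)."""
    dims = next(iter(scores.values())).keys()
    return {d: max(person[d] for person in scores.values()) for d in dims}

def gaps(scores, threshold=5):
    """Dimensions where no one meets the threshold -- candidate risk areas
    or sourcing-strategy topics."""
    return [d for d, best in team_envelope(scores).items() if best < threshold]

print(team_envelope(scores))  # {'Architecture': 8, 'Security': 9, 'AI': 3, 'Operations': 9}
print(gaps(scores))           # ['AI'] -- no one scores at least 5 on AI
```

The same per-dimension maxima can then be plotted as the dotted team outline on the spider graph, with each person’s raw scores as the individual traces.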
Ways of Applying the Concept
With the individual inputs from a team having been provided, it’s possible to think about the data in two different respects: how it reflects individual capabilities, gaps, and overlaps as well as what it shows as the collective experience of the team as a whole (the green dotted outline above).
With the data assembled, there are a number of ways to leverage the information, outlined below.
Talent Development: The strengths and gaps in any individual view can be used to inform individual development plans or identify education needs for the team as a whole. It can also be used to sanity check individual roles and accountability against the actual experience of individuals on the team. This isn’t to suggest rotations and “learn on the job” situations aren’t a good thing, but rather to raise awareness of those situations so that they can be managed proactively with the individual or the team as a whole. To the extent that a gap with one person is a strength in another, there could be cross-training opportunities that surface through the process.
Coordination and Collaboration: With overlaps and gaps visible across a team, there can be areas identified where individual team members see opportunities to consult with others who have a similar skillset, and also perhaps a different background that could surface different ways to approach and solve problems. In larger organizations, it can often be difficult to know “who to invite” to a conversation, where the default becomes inviting everyone (or making everyone ‘mandatory’ versus ‘optional’), which ultimately can lead to less productive or over-attended conversations that lack focus.
Leaders and Teams: In the representative data above, I deliberately highlighted areas where team members were not as experienced as the person leading the team, but also the converse situation. In my experience, it is almost never the case that the leader is the most experienced in everything within the scope of what a team has to do. If that were the case, it could suggest that the potential of that team is limited to the leader’s individual capabilities and vision, because others lack the experience to help inform direction. In the event that team members have more experience than their leader, there can also be opportunities for individuals to step up and provide direction, assuming the team leader creates space and a supportive environment for that to occur. Again, the point of the activity is to identify these disparities where they exist and determine what, if anything, to do about them.
Sourcing Strategy: Where significant gaps exist (e.g., there is no one with substantial AI experience in the example data above), these could be areas where finding a preferred partner with a depth of experience in the topic could be beneficial while internal talent is acquired or developed (to the extent it is deemed strategic to the organization).
Business Partnership: The visibility could serve as input to a partnership discussion to align expectations for where business leaders expect support and capability from their technology counterparts versus areas where they are comfortable taking the lead or providing direction. This isn’t always a very deliberate conversation in my experience, and sometimes that can lead to missed expectations in complex delivery situations.
Risk Management: One of the most important things to recognize about a visualization like this is not just what it shows about a team’s capability; it’s also what isn’t there.
Using Donald Rumsfeld’s now famous concept:
There is known – something for which we have facts and experience
There is known unknown – something we know is needed, but which is not yet clear
And the pure unknown – something outside our experience, and therefore a blind spot
The last category is where we should also focus in an activity like this, because the less experience a team has, individually and collectively, the greater the risk, given the lack of awareness of all the “known unknowns” that can have a material impact on delivering solutions and operating IT. To the extent that a team is relatively inexperienced, no matter how motivated it may be, there is an increased probability that something will be compromised, whether that is cost, quality, schedule, morale, or something else. To that end, this tool can be an important mechanism to identify and manage risk.
Wrapping Up
Having recently written a fairly thorough set of articles on the future of enterprise technology, I wanted to back up and look at something a little less complex, but also with a focus on improving transparency and informing leadership discussions on risk, development, and coordination.
Whether through a mechanism like this or some other avenue, I believe there is value in understanding the breadth of capabilities that exist within a team and across a leadership group as a means for promoting excellence overall.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
A new leader in an organization once asked to understand my role. My answer was very simple: “My role is to change mindsets.”
I’m fairly sure the expectation was something different: a laundry list of functional responsibilities, goals, in-flight activities or tasks that were top of mind, the makeup of my team, etc. All relevant aspects of a job, to be sure, but not my primary focus.
I explained that my goal was to help transform the organization, and if I couldn’t change people’s mindsets, everything else that needed to be done was going to be much more difficult. That’s how it is with change.
Complacency is the enemy. Excellence is a journey and you are never meant to reach the destination.
Having been part of and worked with organizations that enjoyed tremendous market share but then encountered adversity and lost their advantage, I’ve seen common characteristics, starting with basking in the glow of that success too long and losing the hunger and drive that made them successful in the first place.
The remainder of this article will explore the topic further in three dimensions: leadership, innovation, and transformation, in the interest of providing some perspective on the things to look for when excellence is your goal.
Fall short of excellence, you can still be great. Try to be great and fail? You’re going to be average… and who wants to be part of something average? No one who wants to win.
Courageous Leadership
As with anything, excellence has to start with leadership. There is always resistance and friction associated with change. That’s healthy and good because it surfaces questions and risks and, in a perfect world, the more points of view you can leverage in setting direction, the more likely you are to avoid blind spots or mistakes made simply for lack of awareness or understanding of what you are doing.
There is a level of discipline needed to accomplish great things over time and courage is a requirement, because there will inevitably be challenges, surprises, and setbacks. How leaders respond to that adversity, through their adaptability, tenacity and resilience will ultimately have a substantial influence on what is possible overall.
Some questions to consider:
Is there enough risk tolerance to create space to try new ideas, fail, learn, and try again?
Is there discipline in your approach so that business choices are thoughtful, reasoned, intentional, measured, and driven towards clear outcomes?
Is there a healthy level of humility to understand that, no matter how much success there is right now, without continuing to evolve, there will always be a threat of obsolescence?
Relentless Innovation
In my article on Excellence by Design, I was deliberate in choosing the word “relentless” in terms of innovation, because I’ve seen so many instances over time of the next silver bullet meant to be a “game changer”, “disruptor”, etc. only to see that then be overtaken by the next big thing a year or so later.
One of the best things about working in technology is that it constantly gives us opportunities to do new things: to be more productive and effective, produce better outcomes, create more customer value, and be more competitive.
Some people see that as a threat, because it requires a willingness to continue to evolve, adapt, and learn. You can’t place too much value on a deep understanding of X technology, because tomorrow Y may come along and make that knowledge fairly obsolete. While there is an aspect of that argument that is true at an implementation level, it gives too much importance to the tools and not enough to the problems we’re ultimately trying to solve, namely creating a better customer experience, delivering a better product or service, and so on.
We need to plan like the most important thing right now won’t be the most important 6 months or even a year from now. Assume we will want to replace it, or integrate something new to work with it, improving our overall capability and creating even more value over time.
What does that do? In a disciplined environment, it should change our mindset about how we approach implementing new tools and technologies in the first place. It should also influence how much exposure we create in the dependencies we place upon those tools in the process of utilizing them.
To take what could be a fairly controversial example: I’ve written multiple articles on Artificial Intelligence (AI), how to approach it, and how I think about it in various dimensions, including where it is going. The hype surrounding these technologies is deservedly very high right now, there is a surge in investment, and a significant number of tools are and will be hitting the market. It’s also reasonable to assume a number of “agentic” solutions will pop up, meant to solve this problem and that… ok… now what happens then? Are things better, worse, or just different? What is the sum of an organization that is fully deployed with all of the latest tools? I don’t believe we have any idea and I also believe it will be terribly inefficient if we don’t ask this question right now.
As a comparison, what history has taught us is that there will be a user plugged into these future ecosystems somewhere, with some role and responsibilities, working in concert (and ideally in harmony) with all this automation (physical and virtual) that we’ve brought to bear on everyone’s behalf. How will they make sense of it all? If we drop in an agent for everything, is it any different than giving someone a bunch of new applications, all of which spit recommendations and notifications and alerts at them, saying “this is what you need to do”, but leaving them to figure out which of those disconnected pieces of advice make the most sense, which should be the priority, and to try somehow not to be overwhelmed? Maybe not, because the future state might be a combination of intelligent applications (something I wrote about in The Intelligent Enterprise) and purpose-built agents that fill gaps those applications don’t cover.
Ok, so why does any of that matter? I’m not making an argument against experimenting with and leveraging AI. My point is that, every time there is a surge towards the next technology advancement, we seldom think about the reality that it will eventually evolve or be replaced by something else, and we should take that into consideration as we integrate those new technologies to begin with. The only constant is change, and that’s a good thing, but we also need to be disciplined in how we think about it on an ongoing basis.
Some questions to consider:
Is there a thoughtful and disciplined approach to innovation in place?
Is there a full lifecycle-oriented view when introducing new technologies, to consider how to integrate them so they can be replaced or to retire other existing, potentially redundant solutions once they are introduced?
Are the new technologies being vetted, reviewed, and integrated as part of a defined ecosystem with an eye towards managing technical debt over time?
Continual Transformation
In the spirit of fostering change, it is very common for a “strategy” conversation to be rooted in a vision. A vision sets the stage for what the future environment is meant to look like. It is ideally compelling enough to create a clear understanding of the desired outcome and to generate momentum in the pursuit of that goal (or set of goals)… and experience has taught me this is actually NOT the first or only thing important to consider in that first step.
Sustainable change isn’t just about having a vision, it is about having the right culture.
The process for strategy definition isn’t terribly complicated at an overall level: define a vision, understand the current state, identify the gaps, develop a roadmap to fill those gaps, execute, adapt, and govern until you’re done.
The problem is that large transformation efforts are extremely difficult to deliver. I don’t believe that difficulty is usually rooted in the lack of a clear vision, or that it is as simple as execution issues ultimately undermining success. I believe successful transformation isn’t a destination to begin with. Transformation should be a continual journey towards excellence.
How that excellence is manifest can be articulated through one or more “visions” that communicate concepts of the desired state, but that picture can and will evolve as capabilities available through automation, process, and organizational change occur. What’s most important is having courageous leadership and the innovation mindset mentioned above, but also a culture driven to sustain that competitive advantage and hunger for success.
Said differently: With the right culture, you can likely accomplish almost any vision, but only some visions will be achievable without the right culture.
Some questions to consider in this regard:
Is there a vision in place for where the organization is heading today?
What was the “previous” vision, what happened to it, did it succeed or fail and, if so, why?
Is the current change viewed as a “project” or a “different way of working”? (I would argue the latter is the desired state nearly in all cases)
Wrapping Up
Having shared the above thoughts, it’s difficult to communicate what is so fundamental to excellence, which is the passion it takes to succeed in the first place.
Excellence is a choice. Success is a commitment. It takes tenacity and grit to make it happen and that isn’t always easy or popular.
There is always room to be better, even in some of the most mundane things we do every day. That’s why courageous leadership is so important and where culture becomes critical in providing the foundation for longer-term success.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
First of all, know that character, values, and integrity matter. They are the foundation of who you are and the reputation you will have with others. Our beliefs and intentions often make their way to our words and actions, so strive to do what’s right, treat others with respect, take accountability for your choices, and know that, in the long term, those who bring kindness and a positive attitude into the world will succeed far more than those who don’t. They will also find themselves surrounded by many others, because a good heart and kindness are forces that will attract others to you over time.
Have faith, no matter what life brings. There will be times when life is challenging and it’s important to know that we are never alone, that God (in all His forms) has a plan, and we will find our way through, as long as we take one day at a time and keep moving forward. There is a great quote from Winston Churchill, “When you’re going through Hell, keep going”, that I always remember in this regard. Faith is our greatest source of hope… and with hope, anything can be possible. With faith and hope, your possibilities in life will be limited only by your capacity to dream.
Work hard. It’s a simple point, but one that isn’t evident to everyone at a time when many seem to feel entitled. Earning your success is an exercise in diligence and commitment as well as persistence and leadership. Oftentimes that effort is not glamorous, requires sacrifice, and will drag you through difficulty, but in struggling and overcoming those obstacles, we find out who we are and the strength we have inside us. No one can give that confidence and experience to you; you simply have to earn it, and it is well worth the effort over time.
Never stop learning. There is always something to understand about other people, new ideas or subjects, and the world around us. Always be looking for the people who can guide and advise you in the different aspects of your life. You will never reach a point where there isn’t an opportunity to grow as a person, and it will make you so much more aware, fulfilled, and worth knowing as time goes on.
Believe in yourself and speak your truth. In the great debate that life can be at times, you should know that your voice matters. At a time when so many take a free pass and just parrot the words, ideology, or biases of others, you do yourself and the world a service to educate yourself, form your own opinion, and respectfully speak your truth, including the times you speak for those who are afraid to do so on their own. Diversity in thought and opinion gives us strength and creates room for change. Let your voice be heard. You can make a difference.
Be humble and be kind. In concert with the previous point, strive to listen as well as you speak. Seek compassion and understanding, including for those who differ most from you. They have their own form of truth, and it can be worth learning what that is, whether you agree with it or not. In a world consumed with egocentric thinking, what we do for others brings the world a little closer, creates the connections that bind us together, and reduces the divisiveness that so many waste their days promoting.
Never give up on your dreams but be ready to pursue new ones when you see them. Life can be like a series of bridges, taking you from one part of your journey to the next, and we often can’t see past the bridge that is immediately in front of us. While it takes tenacity and courage to pursue your life’s passion, understand that your goals will evolve as time progresses, and that’s not a bad thing.
Build upon your successes, learn from your failures. Remember that it’s relatively easy to succeed when you’re not doing anything worth doing or that’s not particularly difficult. Again, this is a relatively simple point, but it’s easy to lose the perspective that failures are a means to learn and become better, and they are definitely something that comes with taking risks in life. There is no benefit to beating yourself up endlessly over your mistakes. Be thankful for the opportunity to learn and move forward, or likely life will give you the opportunity to learn that lesson again down the road.
Understand that true leaders emerge in adversity. Aspire to be the light that can lead others out of darkness to a better place, whether that is in your personal or professional life. It is easy to lead when everything is going well. It is when things go wrong that poor leaders assign blame and make excuses, and strong leaders take the reins, solve problems, and seek to inspire. It’s a choice that takes courage, but it’s worth remembering that it is also where character is built, reputations are made, and results are either accomplished or not.
Accept that life is rarely what we expect it to be. It’s the journey, along with its peaks and valleys, that makes it so worthwhile. Where possible, the best you can do for yourself and for others is to know when to set aside distractions, be present, and engage in the moments you have throughout your day. Make the most of the experience and don’t be a passenger in your own life.
Finally, take the time to express your care for those who matter to you. Life is unpredictable and you will never run out of love to give to others who are truly deserving of it. We spend far too much time waiting for “the right moment” when that time could be right now. Express your gratitude, express your love, express your support… both you and whoever receives those things will be better for it, and you will have an endless supply of those gifts available to give tomorrow as well, so no need to hold them in reserve.
I hope the words were helpful… all the best in the steps you take, in the choices you make, in finding happiness, and living the life of your dreams.
In my most recent articles on Approaching Artificial Intelligence and Transformation, I highlight the importance of discipline in achieving business outcomes. To that end, governance is a critical aspect of any large-scale transformation or delivery effort because it both serves to reduce risk and inform change on an ongoing basis, both of which are an inevitable reality of these kinds of programs.
The purpose of this article is to discuss ways to approach governance overall, avoid common concerns, and establish core elements that will increase the probability of success. Having seen and established many PMOs and governance bodies over time, I can honestly say that they are difficult to put in place for as many intangible reasons as mechanical ones, which I hope to address below.
Have the Right Mindset
Before addressing the execution “dos” and “don’ts”, success starts with understanding that governance is about successful delivery, not pure oversight. Where delivery is the priority, the focus is typically on enablement and support. By contrast, where oversight is the focus, emphasis can be placed largely on controls and intervention. The reality is that both are needed, which will be discussed more below, but starting with an intention to help delivery teams generally translates into a positive and supportive environment, where collaboration is encouraged. If, by comparison, the role of governance is relegated to finding “gotchas” and looking for issues without providing teams with guidance or solutions, the effort likely won’t succeed. Healthy relationships and trust are critical to effective governance, because they encourage transparent and open dialogue. Without that, the process will likely break down or be ineffective somewhere along the way.
In a perfect world, delivery teams should want to participate in a governance process because it helps them do their work.
Addressing the Challenges
Suggesting that you want to initiate a governance process can be a very uncomfortable conversation. As a consultant, clients can feel like it is something being done “to” them, with a third-party reporting on their work to management. As a corporate citizen, it can feel like someone is trying to exercise a level of control over their peers in a leadership team and, consequently, limiting individual autonomy and empowerment in some way. This is why relationships and trust are critically important. Governance is a partnership and it is about increasing the probability of successful outcomes, not adding a layer of management over people who are capable of doing their jobs with the right level of support.
That said, three objections are typically raised when the idea of establishing governance is introduced: that it will slow things down, hinder value creation, and add unnecessary overhead to teams that are already “too busy” or rushing to a deadline. I’ll address each of these in turn, along with what can be done about the concerns in how you approach things.
It Slows Things Down
As I wrote in my article on Excellence by Design, delivering at speed matters. Lack of oversight can lead to efforts going off the rails without the timely intervention and support needed to prevent delays and budget overruns. That being said, if the process slows everything down, you aren’t necessarily helping teams deliver either.
A fundamental question is whether your governance process is meant to be a “gate” or a “checkpoint”.
Gates can be very disruptive, so there should be compliance or risk-driven concerns (e.g., security or data privacy) that necessitate stopping or delaying some or all of a project until certain defined criteria or standards are met. If a process is gated, this should be factored into estimation and planning at the outset, so expectations are set and managed accordingly, and to avoid the “we don’t have time for this” discussion that could otherwise happen. Gating criteria and project debriefs / retrospectives should also be reviewed to ensure standards and guidelines are updated to help both mitigate risk and encourage accelerated delivery, which is a difficult balance to strike. In principle, the more disciplined an environment is, the less “gating” should be needed, because teams are already following standards, doing proper quality assurance, and so on, and risk management should be easier on an average effort.
When it comes to “checkpoints”, there should be no difference in the standards and guidelines in place; the difference is in how exceptions are handled in the course of the review itself. When critical criteria are missed at a gate, there is a “pause and adjust” approach, whereas a checkpoint notes the exception and the requested remedy, ideally along with a timeframe for making it. The team is allowed to continue forward, but with an explicit assumption that it will make adjustments so that overall solution integrity is maintained in line with expectations. There is a level of trust involved in a checkpoint process, because the delivery team may choose not to remediate the issues, in which case the purpose and value of standards are undermined, and a significant amount of complexity and risk is introduced as a result; this is where much technical debt and many delivery issues are created. If that becomes a pattern over time, it may make sense to shift towards a more gated process, particularly where security, privacy, or other critical issues are at stake.
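As an illustrative sketch only (the class and field names here are my own, not part of any standard governance framework), the difference between the two modes can be modeled in a few lines: a gate halts progress until the criterion is met, while a checkpoint records the exception with an agreed remediation date and lets the team proceed.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Finding:
    criterion: str        # e.g. "data privacy review complete"
    remediate_by: date    # agreed timeframe for fixing the exception

@dataclass
class Review:
    mode: str                              # "gate" or "checkpoint"
    exceptions: list = field(default_factory=list)

    def evaluate(self, criterion: str, met: bool, remediate_by: date) -> bool:
        """Return True if the team may continue past this review."""
        if met:
            return True
        if self.mode == "gate":
            return False  # pause and adjust until the criterion is met
        # Checkpoint: note the exception and the requested remedy, continue.
        self.exceptions.append(Finding(criterion, remediate_by))
        return True

gate = Review("gate")
checkpoint = Review("checkpoint")
due = date(2025, 6, 30)
print(gate.evaluate("security standards met", met=False, remediate_by=due))        # False: blocked
print(checkpoint.evaluate("security standards met", met=False, remediate_by=due))  # True: proceeds, exception logged
```

The `exceptions` list is the handshake made explicit: it only has value if someone follows up on the remediation dates, which is exactly the trust point described above.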
Again, the goal of governance is to remove barriers, provide resources where required, and enable successful delivery, but there is a handshake involved, in that the integrity of the process needs to be managed overall. My general point of view is to trust teams to do the right thing and to leverage a checkpoint versus a gated process, but that is predicated on ensuring standards and quality are maintained. To the extent delivery discipline isn’t where it needs to be, a stronger process may be appropriate.
It Erodes Value
To the extent that the process is perceived to be pure overhead, it is important to clarify the overall goals of the process and, to the extent possible, to identify some metrics that can be used to signal whether it is being effective in helping to promote a healthy delivery environment.
At an overall level, the process is about reducing risk, promoting speed and enablement, and increasing the probability of successful delivery. Whether that is measured in changes in budget and schedule variance, issues remediated pre-deployment, or by a downstream measure of business value created through initiatives delivered on time, there should be a clear understanding of what the desired outcomes are and a sanity check that they are being met.
Arguably, where standards are concerned, this can be difficult to evaluate and measure, but the increase in technical debt created in an environment that lacks standards and governance, the cost of operations, and the percentage of effort directed at build versus run can certainly be monitored and evaluated at an overall level.
It Adds Overhead
I remember taking an assignment to help clean up the governance of a delivery environment many years ago where the person leading the organization was receiving a stack of updates every week that was literally three feet of documents when printed, spanning hundreds of projects. It goes without saying that all of that reporting provided nothing actionable, beyond everyone being able to say that they were “reporting out” on their delivery efforts on an ongoing basis. It was also the case that the amount of time project and program managers were focused on updating all that documentation was substantial. This is not governance. This is administration and a waste of resources. Ultimately, by changing the structure of the process, defining standards, and level of information being reported, the outcome was a five-page summary that covered critical programs, ongoing maintenance, production, and key metrics that was produced with considerably less effort and provided much better transparency into the environment.
The goal of governance is providing support, not producing reams of documentation. Ideally, there should be a critical minimum amount of information requested from teams to support a discussion on what they are doing, where they are in the delivery process, the risks or challenges they are facing, and what help (if any) they may need. To the degree that you can leverage artifacts the team is already producing so there is little to no extra effort involved in preparing for a discussion, even better. And, as another litmus test, everything included in a governance discussion should serve a purpose and be actionable. Anything else likely is a waste of time and resources.
Making Governance Effective
Having addressed some of the common concerns and issues, there are also things that should be considered that increase the probability of success.
Allow for Evolution
As I mentioned in the opening, the right mindset has a significant influence on making governance successful. Part of that is understanding it will never be perfect. I believe very strongly in launching governance discussions and allowing feedback and time to mature the process and infrastructure given real experience with what works and what everyone needs.
One of the best things that can be done is to track and monitor delivery risks and technology-related issues and use those inputs to guide and prioritize the standards and guidelines in place. Said differently, you don’t need governance to improve things you already do well, you leverage it (primarily) to help you address risks and gaps you have and to promote quality.
Having seen an environment where a team was “working on” establishing a governance process over an extended period of time versus one that was stood up inside 30 days, I’d rather have the latter process in place and allow for it to evolve than one that is never launched.
Cover the Bases
In the previous section, I mentioned leveraging a critical minimum amount of information to facilitate the process, ideally utilizing artifacts a team already has. Again, it’s not about the process, it’s about the discussion and enabling outcomes.
That being said, since trust and partnership are important, even in a fairly bare bones governance environment, there should be transparency into what the process is, when it should be applied, who should attend, expectations of all participants, and a consistent cadence with which it is conducted.
It should be possible to have ad-hoc discussions if needed, but there is something contradictory to suggesting that governance is a key component to a disciplined environment and not being able to schedule the discussions themselves consistently. Anecdotally, when we conducted project review discussions in my time at Sapient, it was commonly understood that if a team was ever “too busy” to schedule their review, they probably needed to have it as soon as possible, so the reason they were overwhelmed or too busy was clear.
Satisfy Your Stakeholders
The final dimension to consider in making governance effective is understanding and satisfying the stakeholders surrounding it, starting with the teams. Any process can and should evolve, and that evolution should be based on experience obtained executing the process itself, monitoring operating metrics on an ongoing basis, and feedback that is continually gathered to make it more effective.
That being said, if the process never surfaces challenges and risks, it likely isn’t working properly, because governance is meant to do exactly that, along with providing teams with the support they need. Satisfying stakeholders doesn’t mean painting an unrealistically positive picture, especially if there are fundamental issues in the underlying environment.
I have seen situations where teams were encouraged to share inaccurate information about the health of their work in the interest of managing perceptions and avoiding difficult conversations that were critically needed. This is why having an experienced team leading the conversations and a healthy, supportive, and trusting environment is so important. Governance is needed because things do happen in delivery. Technology work is messy and complicated and there are always risks that materialize. The goal is to see them and respond before they have consequential impact.
Wrapping Up
Hopefully I’ve managed to hit some of the primary points to consider when establishing or evaluating a governance process. There are many dimensions, but the most important ones are first, focusing on value and, second, on having the right mindset, relationships, and trust. The process is too often the focus, and without the other parts, it will fail. People are at the center of making it work, nothing else.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
“I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the Earth.” – John F Kennedy, May 25, 1961
When JFK made his famous pronouncement in 1961, the United States was losing in the space race. The Soviet Union was visibly ahead, to the point that the government shuffled the deck, bringing together various agencies to form NASA, and set a target far out ahead of where anyone was focused at the time: landing on the Moon. The context is important as the U.S. was not operating from a position of strength and JFK didn’t shoot for parity or to remain in a defensive posture. Instead, he leaned in and set an audacious goal that redefined the playing field entirely.
I spoke at a town hall fairly recently about “The Saturn V Story”, a documentary that covers the space race and journey leading to the Apollo 11 moon landing on July 20, 1969. The scale and complexity of what was accomplished in a relatively short timeframe was truly incredible and feels like a good way to introduce a Transformation discussion. The Apollo program engaged 375,000 people at its peak, required extremely thoughtful planning and coordination (including the Mercury and Gemini programs that preceded it), and presented a significant number of engineering challenges that needed to be overcome to achieve its ultimate goal. It’s an inspiring story, as any successful transformation effort should be.
The challenge is that true transformation is exceptionally difficult and many of these efforts fail or fall short of their stated objectives. The remainder of this article will highlight some key dimensions that I believe are critical in increasing the probability of success.
Transformation is a requirement of remaining competitive in a global digital economy. The disruptions (e.g., cloud computing, robotics, orchestration, artificial intelligence, cyber security exposure, quantum computing) have occurred and will continue to occur, and success will be measured, in part, based on an organization’s ability to continuously transform, leveraging advanced capabilities to its maximum strategic benefit.
Successful Transformation
Culture versus Outcome
Before diving into the dimensions themselves, I want to emphasize the difference I see between changing culture and the kind of transformation I’m referencing in this article. Culture is an important aspect to affecting change, as I will discuss in the context of the dimensions themselves, but a change in culture that doesn’t lead to a corresponding change in results is relatively meaningless.
To that end, I would argue that it is important to think about “change management” as a way to transition between the current and desired ways of working in a future state environment, but with specific, defined outcomes attached to the goal.
It is insufficient, as an example, to express “we want to establish a more highly collaborative workplace that fosters innovation” without also being able to answer the questions: “To what end?” or “In the interest of accomplishing what?” Arguably, it is the desired outcome that sets the stage for the nature of the culture that will be required, both to get to the stated goal as well as to operate effectively once those goals are achieved. In my experience, this balance isn’t given enough thought when change efforts are initiated, and it’s important to make sure culture and desired outcomes are both clear and aligned with each other.
For more on the fundamental aspects of a healthy environment, please see my article on The Criticality of Culture.
What it Takes
Successful transformation efforts require focus on many levels and in various dimensions to manage what ultimately translates to risk.
The set that come to mind as most critical are having:
An audacious goal
Transformation is, in itself, a fundamental (not incremental) change in what an organization is able to accomplish
To the extent that substantial change is difficult, the value associated with the goal needs to outweigh the difficulties (and costs) that will be required to transition from where you are to where you need to be
If the goal isn’t compelling enough, there likely won’t be the requisite level of individual and collective investment required to overcome the adversity that is typically part of these efforts. This is not just about having a business case. It’s a reason for people to care… and that level of investment matters where transformation is the goal
Courageous, committed leadership
Change is, by its nature, difficult and disruptive. There will be friction and resistance that comes from altering the status quo
The requirements of leadership in these efforts tend to be very high, because of the adversity and risk that can be involved, and a degree of fearlessness and willingness to ride through the difficulties is important
Where this level of leadership isn’t present, it will become easy to focus on obstacles versus solutions and to avoid taking risks, leading to suboptimized results or overall failure of the effort. If it were easy to transform, everyone would be doing it all the time
It is worth noting that, in the case of the Apollo missions, JFK wasn’t there to see the program through, yet it survived both his passing and significant events like the Apollo fire without compromising the goal itself
A question to consider in this regard: Is the goal so compelling that, if the vision holder / sponsor were to leave, the effort would still move forward? There are many large-scale efforts I’ve seen over the years where a change in leadership affects the commitment to a strategy. There may be valid reasons for this to be the case, but arguably both a worthy goal and strong leadership are necessary components in transformation overall
An aligned and supportive culture
There is a significant aspect of accomplishing a transformational agenda that places a burden on culture
On this point, the going-in position matters in the interest of mapping out the execution approach, because anything about the environment that isn’t conducive to facilitating and enabling collaboration and change will ultimately create friction that needs to be addressed and (hopefully) overcome
To the extent that the organization works in silos or that there is significant and potentially unhealthy internal competition within and across leaders, the implications of those conflicts need to be understood and mitigated early on (to the degree possible) so as to avoid what could lead to adverse impacts on the effort overall
As a leader said to me very early in my career, “There is room enough in success for everybody.” Defining success at an individual and collective level may be a worthwhile activity to consider depending on the nature of where an organization is when starting to pursue change
On this final point, I have been in the situation more than once professionally where a team worked to actively undermine transformation objectives because those efforts had an adverse impact to their broader role in an organization. This speaks, in part, to the importance of engaged, courageous leadership to bring teams into alignment, but where that leadership isn’t present, it definitely makes things more difficult. Said differently, the more established the status quo is, the harder it may resist change
A thoughtful approach
“Rome was not built in a day” is probably the best way to summarize this point
The greater the complexity and degree of change involved, the more thought and attention needs to be paid to planning out the approach itself
The Apollo program is a great example of this, because there were countless interim stages in the development of the Saturn V rocket, creating a safe environment for manned space flight, procedures for rendezvous and docking of the spacecraft, etc.
In a technology delivery environment, these can be program increments in a scaled Agile environment, selective “pilots” or “proof-of-concept” efforts, or interim deliveries in a more component-based (and service-driven) architecture. The overall point being that it’s important to map out the evolution of current to future state, allowing for testing and staging of interim goals that help reduce risk on the ultimate objectives
In a different example, when establishing an architecture capability in a large, complex organization, we established an operating model to define roles and responsibilities, but then operationalized the model in layers to help facilitate change with defined outcomes spread across multiple years. This was done purposefully and deliberately in the interest of making the changes sustainable and to gradually shift delivery culture to be more strategically-aligned, disciplined, and less siloed in the process
Agility and adaptiveness
The more advanced and innovative the transformation effort is, the more likely it will be that there is a higher degree of unknown (and knowledge risk) associated with the effort
To that end, it is highly probable that the approach to execution will evolve over time as knowledge gaps are uncovered and limitations and constraints need to be addressed and overcome
There are countless examples of this in the Apollo program, one of the early ones being the abandonment of the “Nova” rocket design, a massive single vehicle that was ultimately eliminated in favor of the multi-stage rocket and lunar lander / command module approach. In this case, the means for arriving at and landing on the moon was completely different than it was at the program’s inception, but the outcome was ultimately the same
I spend some time discussing these “points of inflection” in my article On Project Health and Transparency, but the important concept is not to be too prescriptive when planning a transformation effort, because execution will definitely evolve
Patience and discipline
My underlying assumption is that the level of change involved in transformation is significant and, as such, it will take time to accomplish
The balance to be struck is ultimately in managing interim deliveries in relation to the overall goals of the effort. This is where patience and discipline matter, because it is always tempting to take short cuts in the interest of “speed to market” while compromising fundamental design elements that are important to overall quality and program-level objectives (something I address in Fast and Cheap, Isn’t Good)
This isn’t to say that tradeoffs can’t or shouldn’t be made, because they often are, but rather that these be conscious choices, done through a governance process, and with a full understanding of the implications of the decisions on the ultimate transformation objectives
A relentless focus on delivery
The final dimension is somewhat obvious, but is important to mention, because I’ve encountered transformative efforts in the past that spent so much energy on structural or theoretical aspects of their “program design” that they actually failed to deliver anything
In the case of the Apollo program, part of what makes the story so compelling is the number of times the team needed to innovate to overcome issues that arose, particularly to various design and engineering challenges
Again, this is why courageous, committed leadership is so important to transformation. The work is difficult and messy and it’s not for the faint of heart. Resilience and persistence are required to accomplish great things.
Wrapping Up
Hopefully this article has provided some areas to consider in either mapping out or evaluating the health of a transformational effort. As I covered in my article On Delivering at Speed, there are always opportunities to improve, even when you deliver a complex or high-risk effort. The point is to be disciplined and thoughtful in how you approach these efforts, so the bumps that inevitably occur are more manageable and the impact they have is minimized overall.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
Having had multiple recent discussions related to portfolio management, I thought I’d share some thoughts relative to disciplined operations, in terms of the aforementioned subject and on the associated toolsets as well. This is a substantial topic, but I’ll try to hit the main points and address more detailed questions as and when they arise.
In getting started, given all the buzz around GenAI, I asked ChatGPT “What are the most important dimensions of portfolio management in technology?” What was interesting was that the response aligned with most discussions I’ve had over time, which is to say that it provided a process-oriented perspective on strategic alignment, financial management, and so on (a dozen dimensions overall), with a wonderfully summarized description of each (and it was both helpful and informative). The curious part was that it missed the two things I believe are most important: courageous leadership and culture.
The remainder of this article will focus more on the process dimensions (I’m not going to frame it the same as ChatGPT for simplicity), but I wanted to start with a fundamental point: these things have to be about partnership and value first and process second. If the focus becomes the process, there is generally something wrong in the partnership or the process is likely too cumbersome in how it is designed (or both).
Portfolio Management
Partnership
Portfolio management needs to start with a fundamental partnership and shared investment between business and technology leaders on the intended outcome. Fortunately, or unfortunately, where the process tends to get the most focus (and part of why I’ve heard it so much in the last couple years) is in a difficult market/economy where spend management is the focus, and the intention is largely related to optimizing costs. Broadly speaking, when times are good and businesses grow, the processes for prioritization and governance can become less rigorous in a speed-to-market mindset, the demand for IT services increases, and a significant amount of inefficiency, delivery and quality issues can arise as a result. The reality is that discipline should always be a part of the process because it’s in the best interest of creating value (long- and short-term) for an organization. That isn’t to suggest artificial constraints, unnecessary gates in a process, or anything to hinder speed-to-market. Rather, the goal of portfolio management should be to have a framework in place to manage demand through delivery in a way that facilitates predictable, timely, and quality delivery and a healthy, secure, robust, and modern underlying technology footprint that creates significant business value and competitive advantage over time. That overall objective is just as relevant during a demand surge as it is when spending is constrained.
This is where courageous leadership becomes the other critical overall dimension. It’s never possible to do everything and do it well. The key is to maintain the right mix of work, creating the right outcomes, at a sustainable pace, with quality. Where technology leaders become order takers, a significant amount of risk can be introduced that actually hurts a business over time. Taking on too much without thoughtful planning can result in critical resources being spread too thin, missed delivery commitments, poor quality, and substantial technical debt, all of which eventually undermine the originally intended goal of being “responsive”. This is why partnership and mutual investment in the intended outcomes matter. Not everything has to be “perfect” (and the concept itself doesn’t really exist in technology anyway), but the point is to make conscious choices on where to spend precious company resources to optimize the overall value created.
End-to-End Transparency
Shifting focus from the direction to the execution, portfolio management needs to start with visibility in three areas:
Demand management – the work being requested
Delivery monitoring – the work being executed
Value realization – the impact of what was delivered
In demand management, the focus should ideally be on both internal and external factors (e.g., business priorities, customer needs, competitive and industry trends), a thoughtful understanding of the short- and long-term value of the various opportunities, the requirements (internal and external) necessary to make them happen, and the desired timeframe for those results to be achieved. From a process standpoint, notice of involvement and request for estimate (RFE) processes tend to be important (depending on the scale and structure of an organization), along with ongoing resource allocation and forecast information to evaluate these opportunities as they arise.
Delivery monitoring is important, given the dependencies that can and do exist within and across efforts in a portfolio, the associated resource needs, and the expectations they place on customers, partners, or internal stakeholders once delivered. As and when things change, there should be awareness as to the impact of those changes on upcoming demand as well as other efforts within a managed portfolio.
Value realization is a generally underserved but relatively important part of portfolio management, especially in spending-constrained situations. This level of discipline (at an overall level) is important for two primary reasons: first, to understand the efficacy of estimation and planning processes in the interest of future prioritization and planning and, second, to ensure investments were made effectively in the right priorities. Where there is no “retrospective”, a lot of learnings that could drive continuous improvement and operational efficiency and effectiveness over time may be lost (ultimately having an adverse impact on business value created).
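As a simple illustration of that retrospective discipline, budget and schedule variance can be computed from plan versus actual. This is a sketch only — the function name and fields are assumptions for illustration, not a standard:

```python
def delivery_retrospective(planned_cost: float, actual_cost: float,
                           planned_weeks: float, actual_weeks: float) -> dict:
    """Post-delivery variance check, used as an input to future estimation."""
    return {
        # positive values mean over budget / behind schedule
        "budget_variance": (actual_cost - planned_cost) / planned_cost,
        "schedule_variance": (actual_weeks - planned_weeks) / planned_weeks,
    }

# An effort planned at $100k over 10 weeks that lands at $115k over 12 weeks
# is 15% over budget and 20% over schedule.
result = delivery_retrospective(100_000, 115_000, 10, 12)
```

Tracked consistently across a portfolio, even a measure this simple surfaces whether estimation is improving over time.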
Maintaining a Balanced Portfolio
Two concepts that I believe are important to consider in how work is ultimately allocated/prioritized within an IT portfolio:
Portfolio allocation – the mix of work that is being executed on an ongoing basis
Prioritization – how work is ultimately selected and the process for doing so
A good mental model for portfolio allocation is a jigsaw puzzle. Some pieces fit together, others don’t, and whatever pieces are selected, you ultimately are striving to have an overall picture that matches what you originally saw “on the box”. While you can operate in multiple areas of a puzzle at the same time, you generally can’t focus on all of them concurrently and expect to be efficient on the whole.
What I believe a “good” portfolio should include is four key areas (with an optional fifth):
Innovation – testing and experimenting in areas where you may achieve significant competitive advantage or differentiation
Business Projects – developing solutions that create or enable new or enhanced business capabilities
Modernization – using an “urban renewal” mindset to continue to maintain, simplify, rationalize, and advance your infrastructure to avoid significant end of life, technical debt, or other adverse impacts from an aging or diverse technology footprint
Security – continuing to leverage tools and technologies that manage the ever increasing exposure associated with cyber security threats (internal and external)
Compliance (where appropriate) – investing in efforts to ensure appropriate conformance and controls in regulatory environments / industries
I would argue that, regardless of the level of overall funding, these categories should always be part of an IT portfolio. There can obviously be projects or programs that provide forward momentum in more than one category above, but where there isn’t some level of investment in the “non-business project” areas, likely there will be a significant correction needed at some point in time that could be very disruptive from a business standpoint. It is probably also worth noting that I am not calling out a “technology projects” category above on purpose. From my perspective, if a project doesn’t drive one of the other categories, I’d question what value it creates. There is no value in technology for technology’s sake.
From a prioritization standpoint, I’ve seen both ends of the spectrum over the course of time: environments where there is no prioritization in place and everything with a positive business case (and even some without) is sent into execution, to environments where there is an elaborate “scoring” methodology, with weights and factors and metrics organized into highly elaborate calculations that create a false sense of “rigor” in the efficacy of the process. My point of view overall is that, with the above portfolio allocation model in place, ensuring some balance in each of the critical categories of spend, a prioritization process should include some level of metrics, with an emphasis on short- and long-term business/financial impact as well as a conscious integration of the resource commitments required to execute the effort by comparison with other alternatives. As important as any process, however, are the discussions that should be happening from a business standpoint to ensure the engagement, partnership, and overall business value being delivered through the portfolio (the picture on the box) are reflected in the decisions made.
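One lightweight way to keep the mix balanced without an elaborate scoring model is a simple floor check per category. The minimum percentages below are hypothetical placeholders, not a recommendation:

```python
# Hypothetical minimum share of portfolio spend per category.
MIN_MIX = {
    "innovation": 0.05,
    "business_projects": 0.40,
    "modernization": 0.10,
    "security": 0.05,
    "compliance": 0.00,  # where appropriate
}

def allocation_gaps(spend_by_category: dict) -> dict:
    """Return categories funded below their floor, and by how much."""
    total = sum(spend_by_category.values())
    gaps = {}
    for category, floor in MIN_MIX.items():
        share = spend_by_category.get(category, 0) / total if total else 0.0
        if share < floor:
            gaps[category] = round(floor - share, 4)
    return gaps
```

A portfolio spending everything on business projects would immediately surface gaps in innovation, modernization, and security — the kind of imbalance that eventually forces a disruptive correction.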
Release Management
Part of arriving at the right set of work to do also comes down to release management. A good analogy for release management is the game Tetris. In Tetris, you have various shaped blocks dropping continually into a grid, with the goal of rotating and aligning them to fit as cleanly as possible with what is already on the board. There are and always will be gaps and the fit will never be perfect, but you can certainly approach Tetris in a way that is efficient and well-aligned or in a way that is very wasteful of the overall real estate with which you have to work.
This is a great mental model for how project planning should occur. If you do a good job, resources are effectively utilized, outcomes are predictable, there is little waste, and things run fairly smoothly. If you don’t think about the process and continually inject new work into a portfolio without thoughtful planning as to dependencies and ongoing commitments, there can and likely will be significant waste, inefficiency, collateral impact, and issues in execution.
Release management comes down to two fundamental components:
Release strategy – the approach to how you organize and deliver major and minor changes to various stakeholder groups over time
Release calendar – an ongoing view of what will be delivered at various times, along with any critical “T-minus” dates and/or delivery milestones that can be part of a progress monitoring or gating process used in conjunction with delivery governance processes
From a release strategy standpoint, it is tempting in a world of product teams, DevSecOps, and CI/CD pipelines to assume everything comes down to individual product plans and their associated release schedules. The two primary issues here are the time and effort it generally takes to deploy new technology and the associated change management impact to the end users who are expected to adopt those changes as and when they occur. The more fragmented the planning process, the more business risk there is that ultimately end users or customers will be either under or overserved at any given point in time, where a thoughtful release strategy can help create predictable, manageable, and sustainable levels of change over time across a diverse set of stakeholders being served.
The release calendar, aside from being an overall summary of what will be delivered when and to whom, also should ideally provide transparency into other critical milestones in the major delivery efforts so that, in the event something moves off plan (which is a very normal occurrence in technology and medium to larger portfolios), the relationship to other ongoing efforts can be evaluated from a governance standpoint to determine whether any rebalancing or slotting of work is required.
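A minimal sketch of a release calendar entry with its T-minus milestones, plus a check for when a slipped milestone puts the release date itself at risk. The structure and field names here are illustrative assumptions, not a prescribed format:

```python
from datetime import date

# One calendar entry: a release, its target date, and critical T-minus milestones.
entry = {
    "release": "2025.2",
    "target": date(2025, 6, 15),
    "milestones": {
        "code_freeze": date(2025, 5, 30),
        "uat_complete": date(2025, 6, 8),
    },
}

def record_slip(entry: dict, milestone: str, new_date: date):
    """Move a milestone off plan and flag whether the release target is at risk."""
    slip_days = (new_date - entry["milestones"][milestone]).days
    entry["milestones"][milestone] = new_date
    at_risk = new_date >= entry["target"]  # milestone now lands on/after release day
    return slip_days, at_risk
```

In a governance discussion, an at-risk flag like this is what would trigger the evaluation of whether any rebalancing or slotting of other work is required.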
Change Management
While I won’t spend a significant amount of time on this point, change management is often an area where I’ve seen the process managed very well and relatively poorly. The easy part is generally managing change relative to a specific project or program and that governance often exists in my experience. The issue that can arise is when the leadership overseeing a specific project is only taking into account the implications of change on that effort alone, and not the potential ripple effect of a schedule, scope, or financial adjustment on the rest of the portfolio, future demand, or on end users in the event that releases are being adjusted.
On Tooling
Pivoting from processes to tools, at an overall level, I’m generally not a fan of over-engineering the infrastructure associated with portfolio management. It is very easy for such an infrastructure to take on a life of its own, become a significant administrative burden that creates little value (beyond transparency), or contain outdated and inaccurate information to the degree that the process involves too much data without underlying ownership and usage of the data obtained.
The goal is the outcome, not the tools.
To the extent that a process is being established, I’d generally want to focus on transparency (demand through delivery) and a healthy ongoing discussion of priorities in the interest of making informed decisions. Beyond that, I’ve seen a lot of reporting that doesn’t result in any action being taken, which I consider to be very ineffective from a leadership and operational standpoint.
Again, if the process is meant to highlight a relationship problem (for example, a dashboard that requires a large number of employees to capture timesheets, mark them to various projects, and roll them up, all to support a management discussion concluding "we’re over-allocated and burning out our teams"), my question would be why all of that data and effort was required to "prove" something, whether there is actual trust and partnership, whether there are other underlying delivery performance issues, and so on. The process and tools are there to enable effective execution and the creation of business value, not to drain effort and energy into administrivia that could be better applied to delivery.
Wrapping Up
Overall, having spent a number of years seeing well-developed and well-executed processes as well as less robust versions of the same, effective portfolio management comes down to value creation. When the focus becomes the process, the dashboard, the report, or the metrics, something is amiss in my experience. It should be about informing engaged leadership, fostering partnership, enabling decisions, and creating value. That is not to say that average utilization of critical resources (as an example) isn’t a good thing to monitor and keep in mind, but it’s what you do with that information that matters.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
I’ve been thinking about writing this article for a while, with the premise of “what does IT look like in the future?” In a digital economy, the role of technology in The Intelligent Enterprise will certainly continue to be creating value and competitive business advantage. That being said, one can reasonably assume a few things that are true today for medium to large organizations will continue to be part of that reality as well, namely:
The technology footprint will be complex and heterogeneous in its makeup. To the degree that there is a history of acquisitions, even more so
Cost will always be a concern, especially to the degree it exceeds value delivered (this is explored in my article on Optimizing the Value of IT)
Agility will be important in adopting and integrating new capabilities rapidly, especially given the rate of technology advancement only appears to be accelerating over time
Talent management will be complex given the variety of technologies present will be highly diverse (something I’ve started to address in my Workforce and Sourcing Strategy Overview article)
My hope is to provide some perspective in this article on where I believe things will ultimately move in technology, in the underlying makeup of the footprint itself, how we apply capabilities against it, and how to think about moving from our current reality to that environment. Certainly, all five of the dimensions I outlined in my article on Creating Value Through Strategy will continue to apply at an overall strategy level (four of which are referenced in the bullet points above).
A Note on My Selfish Bias…
Before diving further into the topic at hand, I want to acknowledge that I am coming from a place where I love software development and the process surrounding it. I taught myself to program in the third grade (in Apple Basic), got my degree in Computer Science, started as a software engineer, and taught myself Java and .Net for fun years after I stopped writing code as part of my “day job”. I love the creative process for conceptualizing a problem, taking a blank sheet of paper (or white board), designing a solution, pulling up a keyboard, putting on some loud music, shutting out distractions, and ultimately having technology that solves that problem. It is a very fun and rewarding thing to explore those boundaries of what’s possible and balance the creative aspects of conceptual design with the practical realities and physical constraints of technology development.
All that being said, insofar as this article is concerned, when we conceptualize the future of IT, I wanted to put a foundational position statement forward to frame where I’m going from here, which is:
Just because something is cool and I can do it, doesn’t mean I should.
That is a very difficult thing to internalize for those of us who live and breathe technology professionally. Pride of authorship is a real thing and, if we’re to embrace the possibilities of a more capable future, we need to apply our energies in the right way to maximize the value we want to create in what we do.
The Producer/Consumer Model
Where the Challenge Exists Today
The fundamental problem I see in technology as a whole today (I realize I’m generalizing here) is that we tend to want to be good at everything, build too much, customize more than we should, and treat things like standards and governance as inconveniences that slow us down in the "deliver now" environment in which we generally operate (see my article Fast and Cheap, Isn’t Good for more on this point).
Where that leaves us is bloated, heavy, expensive, and slow… and it’s not good. For all of our good intentions, IT doesn’t always have the best reputation for understanding, articulating, or delivering value in business terms and, in quite a lot of situations I’ve seen over the years, our delivery story can be marred with issues that don’t create a lot of confidence when the next big idea comes along and we want to capitalize on the opportunity it presents.
I’m being relatively negative on purpose here, but the point is to start with the humility of acknowledging the situation that exists in a lot of medium to large IT environments, because charting a path to the future requires a willingness to accept that reality and to create sustainable change in its place. The good news, from my experience, is that most IT organizations I’ve seen have one thing going for them that can be a critical element in pivoting to where we need to be: a strong sense of ownership. That ownership may show up as frustration with the status quo depending on the organization itself, but I’ve rarely seen an IT environment where the practitioners themselves don’t feel ownership for the solutions they build, maintain, and operate or have a latent desire to make them better. There may be a lack of strategy or commitment to change in many organizations, but the underlying potential to improve is there, and that’s a very good thing if capitalized upon.
Challenging the Status Quo
Pivoting to the future state has to start with a few critical questions:
Where does IT create value for the organization?
Which of those capabilities are available through commercially available solutions?
To what degree are “differentiated” capabilities or features truly creating value? Are they exceptions or the norm?
Using an example from the past, a delivery team was charged with solving a set of business problems that it routinely addressed through custom solutions, even though the same capabilities could have been delivered by integrating one or more commercially available technologies. From an internal standpoint, the team promoted the idea that it had a rapid delivery process, was highly responsive to the business needs it was meant to address, and so on. The problem is that the custom approach actually cost more to develop, maintain, and support, and was considerably more difficult to scale. Because solutions were also continually developed without standards, the team’s ability to adopt or integrate new technologies available on the market was non-existent. Those situations inevitably led to more custom solutions, and the cost of ownership skyrocketed over time.
This situation begs the question: if it’s possible to deliver equivalent business capability without building anything “in house”, why not do just that?
In the proverbial “buy versus build” argument, these are the reasons I believe it is valid to ultimately build a solution:
There is nothing commercially available that provides the capability at a reasonable cost
I’m referencing cost here, but it’s critical to understand the total cost of ownership (TCO) implications of building and maintaining a solution over time. They are very often underestimated.
There is a commercially available solution that can provide the capability, but something about privacy, IP, confidentiality, security, or compliance-related concerns makes that solution infeasible in a way that contractual terms can’t address
I mention contracting purposefully here, because I’ve seen viable solutions eliminated from consideration over a lack of willingness to contract effectively, and that seems suboptimal by comparison with the cost of building alternative solutions instead
Ultimately, we create value in business capabilities enabled through technology; "who" built them doesn’t matter.
Rethinking the Model
My assertion is that we will obtain the most value and acceleration of business capabilities when we shift towards a producer/consumer model in technology as a whole.
What that suggests is that “corporate IT” largely adopts the mindset of the consumer of technologies (specifically services or components) developed by producers focused purely on building configurable, leverageable components that can be integrated in compelling ways into a connected ecosystem (or enterprise) of the future.
What corporate IT "produces" should be limited to differentiated capabilities that are not commercially available, plus a limited set of foundational capabilities outlined below. Producing less and thinking more like a consumer should shift the focus internally towards how technology can more effectively enable business capability and innovation, and externally towards understanding, evaluating, and selecting from the best-of-breed capabilities in the market that deliver on those business needs.
The implication, of course, for those focused on custom development would be to move towards those differentiated capabilities or entirely to the producer side (in a product-focused environment), which honestly could be more satisfying than corporate IT for those with a strong development inclination.
The cumulative effect of these adjustments should lead to an influx of talent into the product community, an associated expansion of available advanced capabilities in the market, and an accelerated ability to eventually adopt and integrate those components in the corporate environment (assuming the right infrastructure is then in place), creating more business value than is currently possible where everyone tries to do too much and sub-optimizes their collective potential.
Learning from the Evolution of Infrastructure
The Infrastructure Journey
You don’t need to look very far back in time to remember when the role of a CTO was largely focused on managing data centers and infrastructure in an internally hosted environment. Along the way, third parties emerged to provide hosting services and alleviate the need to be concerned with routine maintenance, patching, and upgrades. Then converged infrastructure and the software-defined data center provided opportunities to consolidate and optimize that footprint and manage cost more effectively. With the rapid evolution of public and private cloud offerings, the arguments for managing much of your own infrastructure, beyond those related specifically to compliance or legal concerns, are very limited, and the trajectory of edge computing environments is still evolving fairly rapidly as specialized computing resources and appliances are developed. The lesson being: it’s not what you manage in house that matters, it’s the services you provide relative to security, availability, scalability, and performance.
Ok, so what happens when we apply this conceptual model to data and applications? What if we were to become a consumer of services in these domains as well? The good news is that this journey is already underway; the question is how far we should take things in the interest of optimizing the value of IT within an organization.
The Path for Data and Analytics
In the case of data, I think about this area in two primary dimensions:
How we store, manage, and expose data
How we apply capabilities to that data and consume it
In terms of storage, the shift from hosted data to cloud-based solutions is already underway in many organizations. The key levers continue to be ensuring data quality and governance, finding ways to minimize data movement and optimize data sharing (while facilitating near real-time analytics), and establishing means to expose data in standard ways (e.g., virtualization) that enable downstream analytic capabilities and consumption methods to scale and work consistently across an enterprise. Certainly, the cost of ingress and egress of data across environments is a key consideration, especially where SaaS/PaaS solutions are concerned. Another opportunity continues to be the money wasted on building data lakes (beyond archival and unstructured data needs) when viable platform solutions in that space are available. From my perspective, the less time and resources spent on moving and storing data to no business benefit, the more energy can be applied to exposing, analyzing, and consuming that data in ways that create actual value. Simply put, we don’t create value in how or where we store data; we create value in how we consume it.
On the consumption side, having a standards-based environment with a consistent method for exposing data and enabling integration will lend itself well to tapping into the ever-expanding range of analytical tools on the market, as well as swapping out one technology for another as those tools continue to evolve and advance in their capabilities over time. The other major pivot is to shift from "traditional" analytical reporting and business intelligence solutions to more dynamic data apps that leverage AI to inform meaningful end-user actions, whether for internal or external users of systems. Compliance-related needs aside, at an overall level, the primary goal of analytics should be informed action, not administrivia.
The Shift In Applications
The challenge in the applications environment is arbitrating the balance between monolithic (“all in”) solutions, like ERPs, and a fully distributed component-based environment that requires potentially significant management and coordination from an IT standpoint.
Conceptually, for smaller organizations, where the core applications (like an ERP suite + CRM solution) represent the majority of the overall footprint and there aren’t a significant number of specialized applications that must interoperate with them, it likely would be appropriate and effective to standardize based on those solutions, their data model, and integration technologies.
On the other hand, for a medium- to large-size organization with a more diverse and complex underlying footprint, there is value in looking at ways to decompose these relatively monolithic environments to provide interoperability across solutions, enable rapid integration of new capabilities into a best-of-breed ecosystem, and facilitate analytics that span multiple platforms in ways that would be difficult, costly, or impossible within any one or two given solutions. What that translates to, in my mind, is an eventual decline of the monolithic ERP-centric environment in favor of a service-driven ecosystem where individually configured capabilities are orchestrated through data and integration standards, with components provided by various producers in the market. That doesn’t necessarily align to the product strategies of individual companies trying to grow through complementary vertical or horizontal solutions, but I would argue those products should create value at an individual component level and be configurable such that swapping out one component of a larger ecosystem remains feasible without having to abandon the other products in that application suite (which may individually be best-of-breed).
Whether shifting from a highly insourced to a highly outsourced/consumption-based model for data and applications will be feasible remains to be seen, but there was certainly a time not that long ago when hosting a substantial portion of an organization’s infrastructure footprint in the public cloud was a cultural challenge. Moving up the technology stack from the infrastructure layer to data and applications seems like a logical extension of that mindset, placing emphasis on capabilities provided and value delivered versus assets created over time.
Defining Critical Capabilities
Own Only What is Essential
Making an argument to shift to a consumption-oriented mindset in technology doesn’t mean there isn’t value in "owning" anything; rather, it’s a call to evaluate and challenge assumptions about where IT creates differentiated value and to apply our energies towards those things. What can be leveraged, configured, and orchestrated, I would buy and use. What should be built? Capabilities that are truly unique, create competitive advantage, can’t be sourced in the market overall, and that create a unified experience for end users. On the final point, I believe that shifting to a disaggregated applications environment could create complexity for end users in navigating end-to-end processes in intuitive ways, especially to the degree that data apps and integrated intelligence become a common way of working. To that end, building end-user experiences that can leverage underlying capabilities provided by third parties feels like a thoughtful balance between a largely outsourced application environment and a highly effective and productive individual consumer of technology.
Recognize Orchestration is King
Workflow and business process management is not a new concept in the integration space, but it’s been elusive (in my experience) for many years for a number of reasons. What is clear at this point is that, with the rapid expansion in technology capabilities continuing to hit the market, our ability to synthesize a connected ecosystem that blends these unique technologies with existing core systems is critical. The more we can do this in consistent ways, and the more we shift towards a configurable, dynamic, framework-driven environment, the more business flexibility and agility we will provide… and that translates to innovation and competitive advantage over time. Orchestration is also central to deciding which processes matter enough that they shouldn’t be relegated to the internal workings of a platform solution or ERP, but instead taken in-house, mapped out, and coordinated with the intention of creating differentiated value that can be measured, evaluated, and optimized over time. Clearly the scalability and performance of this component is critical, especially to the degree there is a significant amount of activity being managed through this infrastructure, but I believe the transparency, agility, and control afforded in this kind of environment would greatly outweigh the complexity involved in its implementation.
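To make the orchestration idea concrete, here is a minimal sketch of a configuration-driven process engine that also captures the per-step performance data discussed later. The capability names, the in-process registry, and the timing approach are all illustrative assumptions; in a real environment the steps would be calls to externally produced services, not local functions.

```python
import time
from typing import Callable, Dict, List

# Hypothetical registry of capabilities. In practice these would be
# integrations with producer-built services rather than local lambdas.
CAPABILITIES: Dict[str, Callable[[dict], dict]] = {
    "validate_order":  lambda ctx: {**ctx, "valid": True},
    "reserve_stock":   lambda ctx: {**ctx, "reserved": True},
    "notify_customer": lambda ctx: {**ctx, "notified": True},
}

def run_process(step_names: List[str], ctx: dict) -> dict:
    """Execute a configured sequence of steps, recording elapsed time per
    step so process performance can be analyzed and optimized later."""
    metrics = []
    for name in step_names:
        start = time.perf_counter()
        ctx = CAPABILITIES[name](ctx)  # invoke the orchestrated capability
        metrics.append((name, time.perf_counter() - start))
    ctx["metrics"] = metrics
    return ctx

# The process definition is pure configuration: reordering or swapping
# steps requires no change to the capabilities themselves.
result = run_process(
    ["validate_order", "reserve_stock", "notify_customer"],
    {"order_id": 42},
)
```

The point of the sketch is the separation of concerns: the orchestration layer owns the sequence and the performance data, while the capabilities remain interchangeable components.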
Put Integration in the Center
In a service-driven environment, the infrastructure for integration, streaming in particular, along with enabling a publish-and-subscribe model for event-driven processing, will clearly be critical for high-priority enterprise transactions. The challenge in integration conversations, in my experience, tends to be distinguishing the transactions that "matter", in terms of facilitating interoperability and reuse, from those that are suitable for point-to-point, one-off connections. There is ultimately a cost for reuse when you try to scale, and there is discipline needed to arbitrate those decisions to ensure they are appropriate to business needs.
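The decoupling that publish-and-subscribe provides can be shown with a toy in-memory event bus; the topic name and handlers below are illustrative assumptions, and a production version would sit on streaming infrastructure rather than a Python dictionary, but the core idea is the same: the publisher does not know or care who consumes the event.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class EventBus:
    """Minimal in-memory publish/subscribe bus for illustration only."""

    def __init__(self) -> None:
        # Map each topic to the list of handlers subscribed to it.
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Fan the event out to every subscriber; the publisher has no
        # knowledge of downstream consumers.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
received = []
# Two independent consumers (say, billing and fulfillment) react to the
# same business event without any point-to-point connection between them.
bus.subscribe("order.created", lambda e: received.append(("billing", e)))
bus.subscribe("order.created", lambda e: received.append(("fulfillment", e)))
bus.publish("order.created", {"order_id": 7})
```

Adding a third consumer later is a single `subscribe` call; nothing about the publisher changes, which is exactly the interoperability-and-reuse property worth paying for on the transactions that matter.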
Reassess Your Applications/Services
With any medium to large organization, there is likely technology sprawl to be addressed, particularly if there is a material level of custom development (because component boundaries likely won’t be well architected) and acquired technology (because of the duplication it can cause in solutions and instances of solutions) in the landscape. Another complicating factor could be the diversity of technologies and architectures in place, depending on whether or not a disciplined modernization effort exists, the level of architecture governance in place, and rate and means by which new technologies are introduced into the environment. All of these factors call for a thoughtful portfolio strategy, to identify critical business capabilities and ensure the technology solutions meant to enable them are modern, configurable, rationalized, and integrated effectively from an enterprise perspective.
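A portfolio assessment along these lines is often structured as a two-axis evaluation in the spirit of the Gartner TIME model (Tolerate, Invest, Migrate, Eliminate). The sketch below is one illustrative way to bucket applications; the 1-10 scoring scale, the threshold of 5, and the example application names are assumptions for demonstration, not a prescribed methodology.

```python
def time_bucket(business_fit: int, technical_quality: int) -> str:
    """Place an application into a TIME-style quadrant based on two
    1-10 scores. The threshold of 5 is an illustrative assumption."""
    high_fit = business_fit > 5
    high_quality = technical_quality > 5
    if high_fit and high_quality:
        return "Invest"
    if high_fit:
        return "Migrate"    # valuable capability on a weak platform
    if high_quality:
        return "Tolerate"   # sound technology, limited business value
    return "Eliminate"

# Hypothetical portfolio: app -> (business fit, technical quality)
portfolio = {
    "legacy_crm":     (8, 3),
    "reporting_tool": (2, 2),
    "core_erp":       (9, 8),
}
buckets = {app: time_bucket(*scores) for app, scores in portfolio.items()}
```

The value isn't in the scoring mechanics; it's in forcing an explicit, comparable judgment about fit and quality for every application so rationalization decisions can be discussed and governed consistently.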
Leverage Data and Insights, Then Optimize
With analytics and insights being critical to differentiated business performance, any future IT infrastructure needs an effective data governance program with business stewardship, the right core, standard data sets to enable purposeful, actionable analytics, and process performance data associated with orchestrated workflows. This is not all data; it’s the subset that creates enough business value to justify the investment in making it actionable. As process performance data is gathered through the orchestration approach, analytics can be performed to look for opportunities to evolve processes, configurations, rules, and other characteristics of the environment based on key business metrics, improving performance over time.
Monitor and Manage
With the expansion of technologies and components, internal and external to the enterprise environment, having the ability to monitor and detect issues, proactively take action, and mitigate performance, security, or availability issues will become increasingly important. Today’s tools are too fragmented and siloed to achieve the level of holistic understanding that is needed between hosted and cloud-based environments, including internal and external security threats in the process.
Secure “Everything”
With vulnerabilities and threats expanding at a rate that exceeds most organizations’ ability to mitigate them, treating security (zero trust, vulnerability management, and beyond) as a fundamental requirement of current and future IT environments is a given. The development of a purposeful cyber strategy, prioritizing areas for tooling and governance effectively, and continuing to evolve and adapt that infrastructure will be core to the DNA of operating successfully in any organization. Security is not a nice to have, it’s a requirement.
The Role of Standards and Governance
What makes the framework-driven environment of the future work is ultimately having meaningful standards and governance, particularly for data and integration, but extending into application and data architecture, along with how those environments are constructed and layered to facilitate evolution and change over time. Excellence takes discipline and, while that may require some additional investment in cost and time during the initial and ongoing stages of delivery, it will easily pay for itself in business agility, operating cost/cost of ownership, and risk/exposure to cyber incidents over time.
The Lending Example
Having spent time a number of years ago understanding and developing strategy in the consumer lending domain, the similarities in process between direct and indirect lending, prime and specialty/sub-prime, and simple products like credit cards versus more complex ones like mortgages are difficult to ignore. That being said, it isn’t unusual for systems to exist in a fairly siloed manner, from application to booking, through document preparation, and into the servicing process itself.
What’s interesting, from my perspective, is where the differentiation actually exists across these product sets: in the rules and workflow being applied, while the underlying functions themselves are relatively the same. As an example, one thing that differentiates a lender is its risk management policy, not necessarily the tool it uses to implement its underwriting rules or scoring models per se. Similarly, whether pulling a credit score happens at the front end of the process (as in credit cards) or as an intermediate step (as in education lending), a configurable workflow engine could enable origination across a diverse product set with essentially the same back-end capabilities, and likely at a lower operating cost.
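The lending idea can be sketched as shared back-end capabilities reused across products, with only the workflow configuration differing. The step names, the per-product orderings, the fixed credit score, and the approval cutoff below are all illustrative assumptions, not a real origination design.

```python
# Shared back-end capabilities (illustrative); the differentiation lives
# in the per-product configuration, not in these functions.
def collect_application(app): app["collected"] = True; return app
def verify_enrollment(app):   app["enrolled"] = True; return app   # education-only step
def pull_credit_score(app):   app["score"] = 720; return app       # stubbed bureau pull
def underwrite(app):          app["approved"] = app.get("score", 0) >= 680; return app
def book_loan(app):           app["booked"] = app.get("approved", False); return app

# Product-specific workflows: same capabilities, different composition.
WORKFLOWS = {
    "credit_card": [collect_application, pull_credit_score, underwrite, book_loan],
    "education":   [collect_application, verify_enrollment, pull_credit_score,
                    underwrite, book_loan],
}

def originate(product: str, application: dict) -> dict:
    """Run an application through the configured workflow for a product."""
    for step in WORKFLOWS[product]:
        application = step(application)
    return application

cc = originate("credit_card", {})
ed = originate("education", {})
```

Launching a new lending product in this model means composing a new entry in `WORKFLOWS` and adjusting rules, rather than building another siloed origination system, which is the speed-to-market point of the example.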
So why does it matter? Well, to the degree that the focus shifts from developing core components that implement relatively commoditized capability to the rules and processes that enable various products to be delivered to end consumers, the speed with which products can be developed, enhanced, modified, and deployed should be significantly improved.
Ok, Sounds Great, But Now What?
It Starts with Culture
At the end of the day, even the best-designed solutions come down to culture. As I mentioned above, excellence takes discipline and, at times, a patience and thoughtfulness that seem to contradict the speed with which we want to operate from a technology (and business) standpoint. That being said, given the challenges that ultimately arise when you operate without the right standards, discipline, and governance, the outcome is well worth the associated investments. This is why I placed courageous leadership as the first pillar in the five dimensions outlined in my article on Excellence by Design. Leadership is critical and, without it, everything else becomes much more difficult to accomplish.
Exploring the Right Operating Model
Once a strategy is established to define the desired future state and a culture to promote change and evolution is in place, it is worth considering how to organize around managing that change. I don’t necessarily believe in "all in" operating approaches, whether plan/build/run, a product-based orientation, or some other relatively established model. I do believe that, given leadership and adaptability are critically needed for transformational change, it is worth exploring how the organization is aligned to maintaining and operating the legacy environment versus enabling establishment of, and transition to, the future environment. As an example, rather than assuming a pure product-based orientation, which could mushroom into a bloated organization design where not all leaders are well suited to manage change effectively, I’d consider organizing around a defined set of "transformation teams" that operate in a product-oriented, iterative model. These teams would take on pieces of the technology environment; re-orient, optimize, modernize, and align them to the future operating model; then transition those working assets to leaders who maintain and manage those solutions, freeing the transformation teams to move to the next set of targets. This should be done in concert with establishing "common components" teams (where infrastructure like cloud platform enablement can be a component as well) that are driven to produce core, reusable services or assets that can be consumed in the interest of accelerating delivery and enabling wider adoption of the future operating model for IT.
Managing Transition
One of the consistent challenges with any kind of transformative change is moving from what is likely a very diverse, heterogeneous environment to one that is standards-based, governed, and relatively optimized. While it’s tempting to take on too much scope and ultimately undermine the aspirations of change, I believe there is a balance to be struck in defining and establishing some core delivery capabilities that are part of the future infrastructure while incrementally migrating individual capabilities into that future environment over time. This is another case where disciplined operations and disciplined delivery come into play, so that changes are delivered reliably and in a way that is sustainable and consistent with the desired future state.
Wrapping Up
While a certain level of evolution is guaranteed as part of working in technology, the primary question is whether we will define and shape that future or continually react and respond to it. My belief is that we can, through thoughtful planning and strategy, influence and shape the future environment to be one that enables rapid evolution as well as accelerated integration of best-of-breed capabilities at a pace and scale that is difficult to deliver today. We may never truly reach a full producer/consumer environment that is service-based, standardized, governed, orchestrated, fully secured, and optimized, but falling short of excellence as an aspiration would still leave us in a considerably better place than where we are today… and it’s a journey worth making in my opinion.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.