InPractice: PMOs and Governance

What It Is: Governance (in a delivery context) is the process for reviewing a portfolio of ongoing work in the interest of enabling risk management, containing avoidable cost, and increasing on-time delivery.  PMOs (Project/Program Management Offices) are the mechanism through which governance is typically implemented in many organizations.

Why It Matters: As the volume, risk, and complexity of a portfolio increase, issues typically increase disproportionately, leading to cost overruns, missed expectations on scope or schedule (or both), and reduced productivity.  PMOs, meant to be a mechanism to mitigate these issues, are often set up or executed poorly, becoming largely administrative rather than value-generating capabilities, which amplifies any underlying execution issues.  In an environment where organizations want to transform while maintaining a high level of value/cost efficiency, a disciplined and effective governance environment is critical to promoting IT excellence.

Key Concepts

  • There are countless ways to set up an operating model in relation to PMOs and governance, but the culture and intent have to be right, or the rest of what follows will be more difficult
  • There is a significant difference between a “governing” and an “enabling” PMO in how people perceive the capability itself. While PMOs are meant to accomplish both ends, the priority is enabling successful, on-time, quality delivery, not establishing a “police state”
  • Where the focus of a PMO becomes “governance” that doesn’t drive engagement and risk management, it can easily become an administrative entity that adds cost without creating value, ultimately undermining the credibility of the work as a whole
  • The structure of the overall operating model should align to the portfolio of work, scale of the organization, and alignment of customers to ongoing projects and programs
  • The execution of the governance model may adapt and change year-over-year but, if designed properly, the structure and infrastructure should remain leverageable regardless of those adjustments
  • The remainder of this article will introduce a concept for how to think about portfolio composition and then various dimensions to consider in creating an operating model for governance

Framing the Portfolio

In chalking out an approach to this article, I had to consider how to frame the problem in a way that could account for the different ways that IT portfolios are constructed.  Certainly, the makeup of work in a small- to medium-size organization is vastly different from that of a global, diversified organization.  It would also be different when there are a large number of “enterprise” projects versus a set of highly siloed, customer-specific efforts.  To that end, I’m going to introduce a way of thinking about the types of projects that typically make up an IT project portfolio, then an example governance model, the dimensions of which will be discussed in the next section.

The above graphic provides a conceptual way to organize delivery efforts, using the relatively well-known rocks, pebbles, and sand in a jar metaphor, which also happens to apply to organizing technology delivery.

To establish effective governance, you generally first want to examine and classify delivery projects/programs based on scale (in effort and budget), risk, timeframe, and so on.  This is important so as not to apply a “one size fits all” approach that encumbers lower-complexity efforts with the same level of reporting you would typically have on larger-scale transformation programs.

In the model above, I went with a simple structure of four project types (a small illustrative sketch follows the list):

  • Sand – very low-risk projects, such as a rate change in insurance or a data change in analytics
  • Pebbles – medium complexity work like incremental enhancements or an Agile sprint
  • Rocks – something material, like a package implementation, new technology introduction, product upgrade, or new business or technology capability delivery
  • Boulders – high complexity, multi-year transformation programs, like an ERP implementation where there are multiple material, related projects under one larger delivery umbrella
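
To make the categories concrete, here is a minimal sketch (in Python) of how a portfolio entry might be classified; the class names, thresholds, and example values are purely hypothetical and would be tuned to the organization’s actual budget, duration, and risk profile.

```python
from dataclasses import dataclass
from enum import Enum


class ProjectType(Enum):
    SAND = "sand"        # very low risk (e.g., a rate change or a data change)
    PEBBLE = "pebble"    # medium complexity (e.g., an incremental enhancement or sprint)
    ROCK = "rock"        # material effort (e.g., a package implementation or upgrade)
    BOULDER = "boulder"  # multi-year transformation program (e.g., an ERP implementation)


@dataclass
class DeliveryEffort:
    """One entry in the portfolio; fields are illustrative, not prescriptive."""
    name: str
    budget: float          # planned spend
    duration_months: int
    project_type: ProjectType


def classify(budget: float, duration_months: int) -> ProjectType:
    """Toy classification rule; real thresholds would reflect risk as well as size."""
    if duration_months >= 24:
        return ProjectType.BOULDER
    if budget >= 1_000_000:
        return ProjectType.ROCK
    if budget >= 100_000:
        return ProjectType.PEBBLE
    return ProjectType.SAND


if __name__ == "__main__":
    effort = DeliveryEffort("Policy admin upgrade", 2_500_000, 14,
                            classify(2_500_000, 14))
    print(effort.project_type)  # ProjectType.ROCK
```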

The characteristics of these projects, the metrics you would ideally like to gather, and the level of “review” needed on an ongoing basis vary greatly, which will be explored in the next section.

In a real-world scenario, you might want to identify additional sub-categories to the degree it helps inform architecture or delivery governance processes (e.g., security, compliance, modernization, AI-related projects), most of which would likely be specialized kinds of “Pebbles” and “Rocks” in the above model.  A governance process can easily become bloated, so I am generally a proponent of tuning the model to the work and asking only questions relevant to the type of project being discussed.

What about Agile/SAFe and Product team-oriented environments?  In my experience, it is beneficial to segment delivery efforts because, even in product-based environments, there is normally a mix of projects that are more monolithic in nature (i.e., that would align to “Rocks” and “Boulders”).  Sprints within iterative projects (for a given product team) would likely align to “Pebbles” in the above model, and the question would be how to align the outcome of retrospectives with the overall governance model, which will be addressed below.

Coming back to the diagram, for purposes of illustration we will assume the portfolio we’re supporting is a mix of all four project types (the “Portfolio Makeup” at right above), so that we can discuss how governance can be layered and integrated across the different categories expressed in the model itself.

For the remainder of this article, we will assume the work in the delivery portfolio is divided equally between two business customer groups (A and B), with delivery teams supporting each as represented in the below diagram.

If your individual scenario involves a common customer, the model below could be simplified to one branch of the two represented.  If there were multiple groups, it could be scaled horizontally (adding branches for each additional organization); if there were multiple groups across various geographies, it could be scaled by replicating and sizing the entire structure by entity (e.g., work organized by country in a global organization or by operating company in a conglomerate with multiple OpCos) and then adding one additional layer for enterprise or global governance.

Key Dimensions

There are many dimensions to consider in establishing an enterprise delivery governance model.  The following breakdown is not intended to be exhaustive, but rather to highlight some key concepts that I believe are important to consider when designing the operating model for an IT organization.

General Design Principles

  • The goal is to enable decisions as close to the delivery as possible to improve efficiency and minimize the amount of “intervention” needed, unless it is a matter of securing additional resources (labor, funding, etc.) or addressing change control issues
  • The model should leverage a common operating infrastructure to the extent possible, to enable transparency and benchmarking across projects and portfolios. The more consistency and the more “plug and play” the infrastructure for monitoring and governance is, the faster (and more cost-effectively) projects and programs can typically be kicked off and accelerated into execution without having to define these processes independently
  • Metrics should move from summarized to more detailed as you move from oversight to execution, but the ability to “drill down” should ideally be supported, so there is traceability (see the sketch after this list)
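
As a small illustration of the summarize-with-drill-down principle, the sketch below (hypothetical structures, not a prescribed tool) rolls project-level health up into a portfolio view while keeping pointers back to the underlying detail for traceability.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class ProjectStatus:
    """Execution-level detail; only overall health is shown here for brevity."""
    name: str
    overall_health: str  # "Red" / "Yellow" / "Green"


def portfolio_summary(projects: list[ProjectStatus]) -> dict:
    """Summarized view for oversight, with drill-down pointers back to the detail."""
    counts = Counter(p.overall_health for p in projects)
    return {
        "health_counts": dict(counts),  # summarized for oversight
        "needs_attention": [p.name for p in projects if p.overall_health != "Green"],
    }


if __name__ == "__main__":
    print(portfolio_summary([
        ProjectStatus("Rock A", "Green"),
        ProjectStatus("Pebble 3", "Yellow"),
        ProjectStatus("Boulder 1", "Red"),
    ]))
```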

Business and IT PMOs versus “One” Consolidated Model

  • There is a proverbial question as to whether it is better to have “one” integrated PMO construct or an IT PMO separate from one that manages business dependencies (whether centralized or distributed)
  • From my perspective, this is a matter of scale and complexity. For smaller organizations, it may be efficient and practical to run everything through the same process, but as work scales, my inclination would be to separate concerns to keep the process from becoming too cumbersome and to leverage the issue and risk management infrastructure to track and manage items relevant to the technology aspects of delivery.  There should be linkage and coordination to the extent that parallel organizations exist, but I would generally operate them independently so each can focus on its scope of concerns and be as effective as possible

Portfolio Management Integration

  • I’m assuming that portfolio management processes would operate “upstream” of the governance process, informing which projects are being slotted, overall resource management and utilization, and release strategy
  • To the extent that change control in the course of delivery affects a planned release, a reverse dependency exists from the governance process back to the portfolio management process to see if schedule changes necessitate any bumping or reprioritization because of resource contention or deployment issues

IT Operations Integration

  • The infrastructure used to track and monitor delivery should come via the IT Operations capability, theoretically connecting the executive-level delivery metrics on the IT Scorecard to the portfolio and project metrics tracked at the execution level
  • IT Operations should own (or minimally help establish) the standards for reporting across the entire operating model

Participation

  • IT Operations should facilitate the centralized “Unit-level” and “Enterprise” governance processes represented in the diagram above. Program-level governance for “Boulders” would likely be best run by the delivery leadership accountable for those efforts
  • Participation should include whoever is needed to engage and resolve 80% (anecdotally) of the issues and risks that could be raised, but be limited to only people who need to be there
  • Governance processes should never be a “visibility” or “me too” exercise; they are a risk management and issue resolution activity, meant to drive engagement and support for delivery. Notes and decisions can and should be distributed to a broader audience as appropriate so additional stakeholders are informed
  • In the context of a RACI model (Responsible, Accountable, Consulted, Informed), meetings should include only “R” and “A” parties, who can reach out to extended stakeholders as needed (“C”), but very rarely anyone who would be defined as an “I” only
  • It is very easy to either overload a meeting to the point it becomes ineffective or to omit the right participants to the extent it doesn’t accomplish anything, so this is a critical consideration for making a governance model effective (a minimal filtering sketch follows this list)
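
A trivial way to picture the participation rule above: given a RACI assignment for a governance topic, only the “R” and “A” parties receive an invitation, “C” parties are reached out to as needed, and “I” parties receive the notes.  The role names below are hypothetical.

```python
# Hypothetical RACI assignment for one governance topic
raci = {
    "delivery_lead":   "R",
    "program_sponsor": "A",
    "security_lead":   "C",
    "comms_team":      "I",
}

invitees = [person for person, role in raci.items() if role in ("R", "A")]
reach_out_as_needed = [person for person, role in raci.items() if role == "C"]
notes_distribution = [person for person, role in raci.items() if role == "I"]

print(invitees)             # ['delivery_lead', 'program_sponsor']
print(reach_out_as_needed)  # ['security_lead']
print(notes_distribution)   # ['comms_team']
```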

Session Scope and Scheduling

  • I’ve already addressed participation, but scheduling should consider the pace and criticality of interventions. Said differently, a frequent, recurring process may make sense when there is a significant volume of work, while something more episodic may be preferable when there are only a limited number of major milestones where it makes sense to review progress and check in
  • Where an ongoing process is intended, “Boulders” and “Rocks” should have a standing spot on the agenda given the criticality and risk profiles of those efforts are likely high. For “Pebbles”, some form of rotational involvement might make sense, such as including two of the four projects in the example above in every other meeting, or prioritizing any projects showing a Yellow or Red overall project health.  In the case of the “Sand”, those projects are likely so low risk that, beyond reporting some very basic operating metrics, they should only be included in a governance process when there is an issue that requires intervention or a schedule change with potential downstream impacts (a simple agenda-selection sketch follows this list)
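
To make the scheduling logic concrete, here is a small, hypothetical agenda-selection sketch: “Rocks” and “Boulders” always get a slot, “Pebbles” rotate unless their health is Yellow or Red, and “Sand” only appears when there is an escalation.  The rotation rule, field names, and example data are illustrative only.

```python
from dataclasses import dataclass


@dataclass
class Effort:
    name: str
    project_type: str              # "sand" | "pebble" | "rock" | "boulder"
    overall_health: str = "Green"
    open_escalation: bool = False  # an issue needing intervention or a schedule change


def build_agenda(portfolio: list[Effort], meeting_number: int) -> list[str]:
    """Pick which efforts get a slot in a given governance session (illustrative rules)."""
    agenda: list[str] = []
    pebbles = [e for e in portfolio if e.project_type == "pebble"]
    for effort in portfolio:
        if effort.project_type in ("rock", "boulder"):
            agenda.append(effort.name)                 # always reviewed
        elif effort.project_type == "pebble":
            rotating = pebbles.index(effort) % 2 == meeting_number % 2
            if rotating or effort.overall_health != "Green":
                agenda.append(effort.name)             # rotation- or health-driven
        elif effort.open_escalation:                   # sand: only on escalation
            agenda.append(effort.name)
    return agenda


if __name__ == "__main__":
    portfolio = [
        Effort("Boulder 1", "boulder"),
        Effort("Rock A", "rock"),
        Effort("Pebble 1", "pebble"),
        Effort("Pebble 2", "pebble", overall_health="Yellow"),
        Effort("Sand 7", "sand", open_escalation=True),
        Effort("Sand 8", "sand"),
    ]
    print(build_agenda(portfolio, meeting_number=1))
    # ['Boulder 1', 'Rock A', 'Pebble 2', 'Sand 7']
```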

Governance Processes

  • I mentioned this in concert with the example portfolio structure above, but it is important to tailor the governance approach to the type of work so as not to create a cumbersome or bureaucratic environment where delivery teams focus on reporting rather than managing and delivering their work
  • Compliance and security projects, as an example, are different from AI, modernization, or other types of efforts and should be reviewed with that in mind. To the extent a team is asked to provide a set of information as input to a governance process that doesn’t align cleanly to what they are doing, it becomes a distraction that creates no value.  That being said, there should be some core indicators and metrics that are collected regardless of the project type and reviewed consistently (as will be discussed in the next dimension)
  • The process should be designed and managed by IT Operations so it can be leveraged across an organization. While individual nuances can be applied that are specific to a particular delivery organization, it is important to have consistency to enable enterprise-level benchmarking and avoid the potential biases that can come from teams defining their own standards that could limit transparency and hinder effective risk management

Delivery Health and Metrics

  • I’ve written separately on Health and Transparency, but minimally every project should maintain a Red/Yellow/Green indicator on Overall Health, and second-level indicators on Schedule, Scope, Cost, Quality, and Resourcing that a project/program manager could supply very easily on an ongoing basis. That data should be collected at a defined interval to enable monitoring and inform governance processes, regardless of other quantitative metrics gathered
  • Metrics on financials, resourcing, quality, issues, risks, and schedule can vary, but to the extent they can be drawn automatically from defined system(s) of record (e.g., MS Project, financial systems, a time tracking system with defined project coding, defect or incident management tools), the manual intervention required to enable governance should ideally be limited to data the teams should already be using on an ongoing basis
  • In the event that there are multiple systems in place to track ongoing work, the IT Operations team should work with the delivery stakeholders to identify any enterprise-level standards required to normalize them for reporting and governance purposes. To give a specific example, I once encountered a situation where there were five different defect management systems in place across a highly diversified IT organization.  In that case, the team developed a standard definition of how defects would be tracked and reported, and the individual systems of record were mapped to that definition so that reporting was consistent across the organization (a minimal sketch of this kind of mapping follows this list)
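
A minimal sketch of the two ideas above: the lightweight health record a project/program manager supplies at each interval, and a mapping that normalizes severity values from multiple (hypothetical) defect tracking tools to one enterprise standard for consistent reporting.

```python
from dataclasses import dataclass


@dataclass
class HealthReport:
    """Minimal record a project/program manager could supply at each reporting interval."""
    project: str
    overall: str     # Red / Yellow / Green
    schedule: str
    scope: str
    cost: str
    quality: str
    resourcing: str


# Hypothetical mapping from each source system's local severity labels
# to a single enterprise-standard scale used for reporting and governance.
SEVERITY_MAP = {
    "tool_a": {"blocker": "Critical", "major": "High", "minor": "Low"},
    "tool_b": {"sev1": "Critical", "sev2": "High", "sev3": "Medium", "sev4": "Low"},
}


def normalize_severity(source_system: str, local_severity: str) -> str:
    """Translate a source-system severity into the enterprise-standard value."""
    try:
        return SEVERITY_MAP[source_system][local_severity]
    except KeyError:
        # Surface unmapped values explicitly rather than silently mis-reporting them
        raise ValueError(f"No enterprise mapping for {source_system}/{local_severity}")


if __name__ == "__main__":
    print(HealthReport("Rock A", "Yellow", "Yellow", "Green", "Green", "Yellow", "Green"))
    print(normalize_severity("tool_b", "sev2"))  # High
```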

Change Control

  • Change is a critical area to monitor in any governance process because of the potential impact it has to resource consumption (labor, financials), customer delivery commitments, and schedule conflicts with other initiatives
  • Ideally, a governance process should have the right information available to understand the implications of a change as and when it is being reviewed, as well as the right stakeholders present to make decisions based on that information
  • To the extent that schedule, financial, or resource considerations change, information needs to be sent back to the IT Portfolio Management process to remedy any potential issues or disruptions caused by those decisions. In my experience, this is consistently missed in large delivery portfolios

Issue and Risk Management

  • Leveraging a common issue and risk management infrastructure both promotes a consistent way to track and report on these items across delivery efforts and creates a repository of “learnings” that can be reviewed and harvested to evaluate the efficacy of different approaches taken for similar issues/risks and to promote delivery health over time

Dependency/Integrated Plan Management

  • There are two dimensions to consider when it comes to dependencies. First is whether they exist within a project/program or run from that effort to others in the portfolio or downstream of it.  Second is whether the dependency occurs during the course of the effort or is connected to the delivery/deployment of the project
  • In my experience, teams are very good at covering project- or program-driven dependencies, but there can be major gaps in looking across delivery efforts to account for risks caused when things change. To that end, some form of dependency matrix should exist to identify and track dependencies across delivery efforts (sketched after this list), separate from a release calendar that focuses solely on deployment and “T-minus” milestones as projects near deployment
  • Once these dependencies are being tracked, changes that surface through the governance process can be escalated back to the IT Portfolio Management process and other delivery teams to understand and coordinate any adjustments required
  • This can include situations with sequential dependencies, as an example, where a schedule overrun requires additional commitment from a critical resource needed to kick off or participate in another delivery effort. Without a means to identify these dependencies, the downstream effort may be delayed or lack the time to explore alternate resourcing options, creating a ripple effect on that downstream delivery.  This is part of the argument for leveraging named resource planning (versus exclusively FTE-/role-based) for critical resources when slotting during the portfolio management process
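
As a simple illustration of a cross-effort dependency matrix, the sketch below records dependencies between delivery efforts and answers the basic governance question: if this effort’s schedule changes, which downstream efforts need to be reviewed?  Field names, dates, and effort names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Dependency:
    """One cross-effort dependency; field names are illustrative."""
    from_effort: str         # the effort that produces or provides something
    to_effort: str           # the effort that consumes or waits on it
    description: str
    needed_by: str           # date the downstream effort needs it
    kind: str = "execution"  # "execution" (during delivery) or "deployment" (release-related)


def impacted_efforts(dependencies: list[Dependency], changed_effort: str) -> list[Dependency]:
    """Given a change on one effort, list the downstream dependencies to review."""
    return [d for d in dependencies if d.from_effort == changed_effort]


if __name__ == "__main__":
    deps = [
        Dependency("Boulder 1", "Rock A", "Shared data model finalized", "2025-06-30"),
        Dependency("Rock A", "Pebble 2", "API available in test", "2025-08-15", kind="deployment"),
    ]
    for d in impacted_efforts(deps, "Boulder 1"):
        print(f"{d.to_effort} depends on '{d.description}' by {d.needed_by}")
```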

Partner/Vendor Management

  • The IT Operations function should ideally help ensure that partners leverage internal reporting mechanisms or minimally conform to reporting standards and plug into existing governance processes where appropriate to do so
  • In the case of “Rocks” and “Boulders” that are largely partner-driven, they will likely have a standalone governance process that leverages whatever process the partner has in place, but the goal should be to integrate and leverage whatever enterprise tools and standards exist so that work can be benchmarked across delivery partners and compared to internally-led efforts as well
  • It is very tempting to treat sourced work differently than projects delivered internal to IT, but who delivers a project should be secondary to whether the project is delivered on time, with quality, and meets its objectives. The standards of excellence should apply regardless of who does the work

Learnings and Best Practices

  • Part of the potential benefit of having a shared infrastructure for executing governance discussions, by comparison with distributing the work, is that it enables you to see patterns in delivery, consistent bottlenecks, risks, and delays, and to leverage those learnings over time to improve delivery quality and predictability
  • Part of the governance process itself can also include having teams provide a post-mortem on their delivery efforts upon completion (successful or otherwise) so that other teams that participate in the governance process and the broader governance team can leverage those insights as appropriate

Change Management

  • While change management isn’t an explicit focus of a PMO/governance model, the dependency management surrounding deployment and learnings coming from various deployments should be coordinated with larger change management efforts and inform them on an ongoing basis in the interest of promoting more effective integration of new capabilities

Some Notes on Product Teams/Agile/SAFe Integration

  • It is tempting to treat product teams as isolated, independent, and discrete pieces of delivery. The issue with moving fully to that concept is that it becomes easy to lose transparency and benchmarking across delivery efforts that surface opportunities to more effectively manage risks and issues outside a given product/delivery team
  • To that end, part of the design process for the overall governance model should look at how to leverage and/or integrate the tooling for Agile projects with other enterprise project tracking tools as needed, along with integrating learnings from retrospectives with overall delivery improvement processes

Wrapping Up

Overall, there are many considerations that go into establishing an operating model for PMOs and delivery governance at an enterprise level.  The most important takeaway is to be deliberate and intentional about what you put in place, keep it light, do everything you can to leverage data that is already available, and keep the balance between the project and the portfolio in mind at all times.  The more project-centric you become, the more likely you will end up siloed and inefficient overall, and that will translate into missed dates, increased costs, and wasted utilization.

For Additional Information: On Health and Transparency, On Delivering at Speed, Making Governance Work, InBrief: IT Operations

Excellence doesn’t happen by accident.  Courageous leadership is essential.

Put value creation first, be disciplined, but nimble.

Want to discuss more?  Please send me a message.  I’m happy to explore with you.

-CJG 12/19/2025

On “Delivering at Speed”

Context

In my article on Excellence by Design, the fifth dimension I reference is “Delivering at Speed”, basically the delicate balance to be struck in technology between creating value on a predictable, regular basis and still ensuring quality.

One thing that is true is that software development is messy and not for the faint of heart or risk averse.  The dynamics of a project, especially if you are doing something innovative, tend to be in flux at a level that requires you to adapt on the fly, make difficult decisions, accept tradeoffs, and abandon the idea that “perfection” even exists.  You can certainly try to design and build out an ivory tower concept, but the probability is that you’ll never deliver it or, in the event you do, that it will take you so long to make it happen that your solution will be obsolete by the time you finally go live.

To that end, this article is meant to share a set of delivery stories from my past.  In three cases, I was told the projects were “impossible” or couldn’t be delivered at the outset.  In the other two, the level of complexity or size of the challenge was relatively similar, though they weren’t necessarily labeled “impossible” at any point.  In all cases, we delivered the work.  What will follow, in each case, are the challenges we faced, what we did to address them, and what I would do differently if the situation happened again.  Even in success, there is ample opportunity to learn and improve.

Looking across these experiences, there is a set of things I would say apply in nearly all cases:

  1. Commitment to Success
    • It has to start here, especially with high velocity or complex projects. When you decide you’re going to deliver day one, every obstacle is a problem you solve, and not a reason to quit.  Said differently, if you are continually looking for reasons to fail, you will.
  2. Adaptive Leadership
    • Change is part of delivering technology solutions. Courageous leadership that embraces humility, accepts adversity, and adapts to changing conditions will be successful far more than situations where you hold onto original assumptions beyond their usefulness
  3. Business/Technology Collaboration
    • Effective communication, a joint investment in success, and partnership make a significant difference in software delivery. The relationships and trust it takes can be an achievement in itself, but the quality of the solution and ability to deliver is definitely stronger where this is in place
  4. Timely Decision Making
    • As I discuss in my article On Project Health and Transparency, there are “points of inflection” that occur on projects of any scale. Your ability to respond, pivot, and execute in a new direction can be a critical determinant to delivering
  5. Allowing for the Unknown
    • With any project of scale or reasonable complexity, there will be pure “unknown” (by comparison with “known unknown”) that is part of the product scope, the project scope, or both. While there is always a desire to deliver solutions as quickly as possible with the lowest level of effort (discussed in my article Fast and Cheap Isn’t Good), including some effort or schedule time proactively as contingency for the unknown is always a good idea
  6. Excessive Complexity
    • One thing that is common across most of these situations is that the approach to the solution was building a level of flexibility or capability that probably was beyond what was needed in practice. This is not unusual on new development, especially where significant funding is involved, because those bells and whistles create part of the appeal that justifies the investment to begin with.  That being said, if a crawl-walk-run type approach that evolves to a more feature-rich solution is possible, the risk profile for the initial efforts (and associated cost) will likely be reduced substantially.  Said differently, you can’t generate return on investment for a solution you never deliver

The remainder of this article is focused on sharing a set of these delivery stories.  I’ve purposefully reordered and made them a bit abstract in the interest of maintaining a level of confidentiality on the original efforts involved.  In practice, these kinds of things happen on projects all the time, the techniques referenced are applicable in many situations, and the specifics aren’t as important.

 

Delivering the “Impossible”

Setting the Stage

I’ll always remember how this project started. I had finished up my previous assignment and was wondering what was next.  My manager called me in and told me about a delivery project I was going to lead, that it was for a global “shrink wrap” type solution, that a prototype had been developed, that I needed to design and build the solution… but not to be too concerned because the timeframe was too short, the customer was historically very difficult to work with, and there was “no way to deliver the project on time”.  Definitely not a great moment for motivating and inspiring an employee, but my manager was probably trying to manage my expectations given the size of the challenge ahead.

Some challenges associated with this effort:

  1. What was being done, in terms of automating an entirely manual process, had never been done before. As such, the requirements didn’t exist at the outset, beyond a rudimentary conceptual prototype that demonstrated the desired user interface behavior
  2. I was completely unfamiliar with the technology used for the prototype and needed to immediately assess whether to continue forward with it or migrate into a technology I knew
  3. The timeframe was very aggressive and the customer was notorious for not delivering on time
  4. We needed everything we developed to fit on a single 3.5-inch diskette for distribution, and it was not a small application to develop

What Worked

Regardless of any of the mechanics, having been through a failed delivery early in my career, my immediate reaction to hearing the project was doomed from the outset was that there was no way we were going to allow that to happen.

Things that mattered in the delivery:

  1. Within the first two weeks, I both learned the technology that was used for the prototype and was able to rewrite it (it was a “smoke and mirrors” prototype) into a working, functional application. Knowing that the underlying technology could do what was needed in terms of the end user experience, we took the learning curve impact in the interest of reducing the development effort that would have been required to try and create a similar experience using other technologies we were using at the time
  2. Though we struggled initially, eventually we brought a leader from the customer team to our office to work alongside us (think Agile before Agile) so we could align requirements with our delivery iterations and produce application sections for testing in relatively complete form
  3. We had to develop everything as compactly as possible given the single disk requirement so the distribution costs wouldn’t escalate (tens of thousands of disks were being shipped globally)

All of the above having helped, the thing that made the largest difference was the commitment of the team (ultimately me and three others) to do whatever was required to deliver.  The brute force involved was substantial, and we worked increasing hours, week-after-week, until we pulled seven all-night sessions in the last ten days leading up to shipping the software to the production company.  It was an exceptionally difficult pace to sustain, but we hit the date, and the “impossible” was made possible.

What I Would Change

While there is a great deal of satisfaction that comes from meeting a delivery objective, especially an aggressive one, there are a number of things I would have wanted to do differently in retrospect:

  1. We grew the team over time in a way that created additional pressure in the latter half of the project. Given we started with no requirements and were doing something that had never been done, I’m not sure how we could have estimated the delivery to know we needed more help sooner, but minimally, from a risk standpoint, there was too much work spread too thinly for too long, and it made things very challenging later on to catch up
  2. As I mentioned above, we eventually transitioned to an integrated business/technology team that delivered the application with tight collaboration. This should have happened sooner, but we collectively waited until it escalated to the level of a critical issue before anyone really addressed it.  That came when we actually ran out of requirements late one night (it was around 2am), to the point that we needed to stop development altogether.  The friction this created between the customer and development teams was difficult to work through and is something the change in approach made much better, just too late in the project
  3. From a software standpoint, given it was everyone on the team’s first foray into new technology (starting with me), there was a lot we could have done to design the solution better, but we were unfortunately learning on the job. This is another one that I don’t know how we could have offset, beyond bringing in an expert developer to help us work through the design and sanity check our work, but it was such new technology at the time, that I don’t know that was really a viable option or that such expertise was available
  4. This was also my first global software solution and I didn’t appreciate the complexities of localization enough to avoid making some very basic mistakes that showed up late in the delivery process.

 

Working Outside the Service Lines

Setting the Stage

This project honestly was sort of an “accidental delivery”, in that there was no intention from a services standpoint to take on the work to begin with.  Similar to my product development experience, there was a customer need to both standardize and automate what was an entirely manual process.  Our role, and mine in particular, was to work with the customer, understand the current process across the different people performing it, look for a way to standardize the workflow itself (in a flexible enough way that everyone could be trained and follow that new process), and then define the opportunities to automate it such that a lot of the effort done in spreadsheets (prone to various errors and risks) could be built into an application that would make it much easier to perform the work.

The point of inflection came when we completed the process redesign and, with no implementation partner in place (and no familiarity with the target technologies in the customer team), the question became “who is going to design and build this solution?”  Having a limited window of time, a significant amount of seasonal business that needed to be processed, and the right level of delivery experience, I offered to shift from a business analyst to the technology lead on the project.  With a substantial challenge ahead, I was again told what we were trying to do could never be done in the time we had.  Having had previous success in that situation, I took it as a challenge to figure out what we needed to do to deliver.

Some challenges associated with this effort:

  1. The timeframe was the biggest challenge, given we had to design and develop the entire application from scratch. The business process was defined, but there was no user interface design, it was built using relatively new technology, and we needed to provide the flexibility users were used to having in Excel while still enforcing a new process
  2. Given the risk profile, the customer IT manager assumed the effort would fail and consequently provided only a limited amount of support and guidance until the very end of the project, which created some integration challenges with the existing IT infrastructure
  3. Finally, given that there were technology changes occurring in the market as a whole, we encountered a limitation in the tools (given the volume of data we were processing) that nearly caused us to hit a full stop mid-development

What Worked

Certainly, an advantage I had coming into the design and delivery effort was that I helped develop the new process and was familiar with all the assumptions we made during that phase of the work.  In that respect, the traditional disconnect between “requirements” and “solution” was fairly well mitigated and we could focus on how to design the interface, not the workflow or data required across the process.

Things that mattered in the delivery:

  1. One major thing that we did well from the outset was work in a prototype-driven approach, engaging with the end customer, sketching out pieces of the process, mocking them up, confirming the behavior, then moving onto the next set of steps while building the back end of the application offline. Given we only had a matter of months, the partnership with the key business customer and their investment in success made a significant difference in the efficiency of our delivery process (again, very Agile before Agile)
  2. Despite the lack of support from customer IT leadership standpoint, a key member of their team invested in the work, put in a tremendous amount of effort, and helped keep morale positive despite the extreme hours we worked for essentially the entire duration of the project
  3. While not as pleasant, another thing that contributed to our success was managing performance actively. Wanting external expertise (and needing the delivery capacity), we pulled in additional contracting help, but had inconsistent experience with the commitment level of the people we brought in.  Simply said: you can’t deliver a high-velocity project with a half-hearted commitment.  It doesn’t work.  The good news is that we didn’t delay decisions to pull people where the contributions weren’t where they needed to be
  4. On the technology challenges, when serious issues arose with our chosen platform, I took a fairly methodical approach to isolating and resolving the infrastructure issues we had. The result was a very surgical and tactical change to how we deployed the application without needing to do a more complex (and costly) end user upgrade that initially appeared to be our only option

What I Would Change

While the long hours and months without a day off ultimately enabled us to deliver the project, there were certainly learnings from this effort that I took away despite our overall success.

Things I would have wanted to do differently in retrospect:

  1. While the customer partnership was very effective overall, one area where we didn’t engage early enough was with the customer analytics organization. Given the large volume of data, heavy reliance on computational models, and the capability for users to select data sets to include in the calculations being performed, we needed more support than expected to verify our forecasting capabilities were working as expected.  This was actually a gap in the upstream process design work itself, as we identified the desired capability (the “feature”) and where it would occur within the workflow, but didn’t flesh out the specific calculations (the “functionality”) that needed to be built to support it.  As a result, we had to work through those requirements during the development process itself, which was very challenging
  2. From a technology standpoint, we assumed a distributed approach for managing data associated with the application. While this reduced the data footprint for individual end users and simplified some of the development effort, it actually made the maintenance and overall analytics associated with the platform more complex.  Ultimately, we should have centralized the back end of the application.  This is something that was done subsequent to the initial deployment, though I’m not certain if we would have been able to take that approach with the initial release and still made the delivery date
  3. From a services standpoint, while I had the capability to lead the design and delivery of the application, the work itself was outside the core service offerings of our firm. Consequently, while we delivered for the customer, there wasn’t an ability to leverage the outcome for future work, which is important in consulting for building your business.  In retrospect, while I wouldn’t have learned and gotten the experience, we should have engaged a partner in the delivery and played a different role in implementation

 

Project Extension

Setting the Stage

Early in my experience of managing projects, I had the opportunity to take on an effort where the entire delivery team was coming off a very difficult, long project.  I was motivated and wanted to deliver, everyone else was pretty tired.  The guidance I received at the outset was not to expect very much, and that the road ahead was going to be very bumpy.

Some challenges associated with this effort:

  1. As I mentioned above, the largest challenge was a lack of motivation, something that had been a strength in other high-pressure deliveries I’d encountered before. I was unused to dealing with it from a leadership standpoint and didn’t address it as effectively as I should have
  2. From a delivery standpoint, the technical solution was fairly complex, which made the work and testing process challenging, especially in the timeframe we had for the effort
  3. At a practical level, the team was larger than I had previous experience leading. Leading other leaders wasn’t something I had done before, which led to me making all the normal mistakes that come with doing so for the first time, which didn’t help on either efficiency or sorely needed motivation

What Worked

While the project started with a team that was already burned out, the good news is that the base application was in place, the team understood the architecture, and the scope was to build out capabilities on top of a reasonably strong foundation.  There was a substantial amount of work to be performed in a relatively short timeframe, but we weren’t starting from scratch and there was recent evidence that the team could deliver.

Things that mattered in the delivery:

  1. The client partnership was strong, which helped both in addressing requirements gaps and, more importantly, in performing customer testing in both an efficient and effective manner given the accelerated timeframe
  2. At the outset of the effort, we revisited the detailed estimates and realigned the delivery team to balance the work more effectively across sub-teams. While this required some cross-training, we reduced overall risk in the process
  3. From a planning standpoint, we enlisted the team to try out an aggressive approach where we set all the milestones slightly ahead of their expected delivery date. Our assumption was that, by trying to beat our targets, we could create some forward momentum that would create “effort reserve” to use for unexpected issues and defects later in the project
  4. Given the pace of the work and size of the delivery team, we had the benefit of strong technical leads who helped keep the team focused and troubleshoot issues as and when we encountered them

What I Would Change

Like other projects I’m covering in this article, the team put in the effort to deliver on our commitments, but there were definitely learnings that came through the process.

Things I would have wanted to do differently in retrospect:

  1. Given it was my first time leading a larger team under a tight timeline, I pushed where I should have inspired. It was a learning experience that I’ve used for the benefit of others many times since.  While I don’t know what impact it might have had on the delivery itself, it might have made the experience of the journey better overall
  2. From a staffing standpoint, we consciously stuck to the team that helped deliver the initial project. Given the burnout was substantial and we needed to do a level of cross-training anyway, it might have been a good idea for us to infuse some outside talent to provide fresh perspective and much needed energy from a development standpoint
  3. Finally, while it was outside the scope of work itself, this project was an example of a situation I’ve encountered a few times over the years where the requirements of the solution and its desired capabilities were overstated and translated into a lot of complexity in architecture and design. My guess is that we built a lot of flexibility that wasn’t required in practice

 

Modernization Program

Setting the Stage

What I think of as another “impossible” delivery came with a large-scale program that started off with everyone but the sponsors assuming it would fail.

Some challenges associated with this effort:

  1. The largest thing stacked against us was two failed attempts to deliver the project in the past, with substantial costs associated with each. Our business partners were well aware of those failures, some having participated in them, and the engagement was tentative at best when we started to move into execution
  2. We also had significant delivery issues with our primary technology partner that resulted in them being transitioned out mid-implementation. Unfortunately, they didn’t handle the situation gracefully, escalated everywhere, told the CIO the project would never be successful, and the pressure on the team to hit the first release on schedule was increased by extension
  3. From an architecture standpoint, the decision was made to integrate new technology with existing legacy software wherever possible, which added substantial development complexity
  4. The scale and complexity for a custom development effort was very significant, replacing multiple systems with one new, integrated platform, and the resulting planning and coordination was challenging
  5. Given the solution replaced existing production systems, there was a major challenge in keeping capabilities in sync between the new application and ongoing enhancements being implemented in parallel by the legacy application delivery team

What Worked

Things that mattered in the delivery:

  1. As much as any specific decision or “change”, what contributed to the ultimate success of the program was our continuous evolution of the approach as we encountered challenges. With a program of the scale and complexity we were addressing, there was no reasonable way to mitigate the knowledge and requirements risks that existed at the outset.  What we did exceptionally well was to pivot and work through obstacles as they appeared… in architecture, requirements, configuration management, and other aspects of the work.  That adaptive leadership was critical in meeting our commitments and delivering the platform
  2. The decision to change delivery partners was a significant disruption mid-delivery that we managed with a weekly transition management process to surface and address risks and issues on an ongoing basis. The governance we applied was very tight across all the touchpoints into the program and it helped us ultimately onboard the new partner and reduce risk on the first delivery date, which we ultimately met
  3. To accelerate overall development across the program, we created both framework and common components teams, leveraging reuse to help reduce risk and effort required in each of the individual product teams. While there was some upfront coordination to decide how to arbitrate scope of work, we reduced the overall effort in the program substantially and could, in retrospect, have built even more “in common” than we did
  4. Finally, to keep the new development in sync with the current production solutions, we integrated the program with ongoing portfolio management processes from work-intake and estimation through delivery as if we were already in production. This helped us avoid rework that would have come if we had to retrofit those efforts post-development in the pre-production stage of the work

The net result of a lot of adjustments and a very strong, committed set of delivery teams was that we met our original committed launch date and moved into the broader deployment of the program.

What I Would Change

The learnings from a program of this scale could constitute an article all on their own, so I’ll focus on a subset that were substantial at an overall level.

Things I would have wanted to do differently in retrospect:

  1. As I mentioned in the point on common components above, the mix between platform and products wasn’t right. Our development leadership was drawn from the legacy systems, which helped, given they were familiar with the scope and requirements, but the downside was that the new platform ended up being siloed in a way that mimicked the legacy environment.  While we started to promote a culture of reuse, we could have done a lot more to reduce scope in the product solutions and leverage the underlying platform more
  2. Our product development approach should have been more framework-centric, being built towards broader requirements versus individual nuances and exceptions. There was a considerable amount of flexibility architected into the platform itself, but given the approach was focused on implementing every requirement as if it was an exception, the complexity and maintenance cost of the resulting platform was higher than it should have been
  3. From a transition standpoint, we should have replaced our initial provider earlier, but given the depth and nature of their relationships and a generally risk-averse mindset, we gave them a matter of months to fail, multiple times, before making the ultimate decision to change. Given there was a substantial difference in execution once we completed transition, we waited longer than we should have
  4. Given we were replacing multiple existing legacy solutions, there was a level of internal competition that was unhealthy and should have been managed more effectively from a leadership standpoint. The impact was that there were times the legacy teams were accelerating capabilities on systems we knew were going to be retired in what appeared to be an effort to undermine the new platform

 

Project Takeover

Setting the Stage

We had the opportunity in consulting to bid on a development project from our largest competitor that was stopped mid-implementation.  As part of the discovery process, we received sample code, testing status, and the defect log at the time the project was stopped.  We did our best to make conservative assumptions on what we were inheriting and the accuracy of what we received, understanding we were in a bidding situation and had to lean into discomfort and price the work accordingly.  In practice, the situation we took over was far worse than expected.

Some challenges associated with this effort:

  1. While the quality of solution was unknown at the outset, we were aware of a fairly high number of critical defects. Given the project didn’t complete testing and some defects were likely to be blocking the discovery of others, we decided to go with a conservative assumption that the resulting severe defect count could be 2x the set reported to us.  In practice the quality was far worse and there were 6x more critical defects than were reported to us at the bidding stage
  2. In concert with the previous point, while the testing results provided a mixed sense of progress, with some areas being in the “yellow” (suggesting a degree of stability) and others in the “red” (needing attention), in practice, the testing regimen itself was clearly not thorough and there wasn’t a single piece of the application that was better than a “red” status, most of them more accurately “purple” (restart from scratch), if such a condition even existed.
  3. Given the prior project was stopped, there was a very high level of visibility, and the expectation to pick up and use what was previously built was unrealistic to a large degree, given the quality of work was so poor
  4. Finally, there was considerable resource contention with the client testing team not being dedicated to the project and, consequently, it became very difficult to verify the solution as we progressed through stabilizing the application and completing development

What Worked

While the scale and difficulty of the effort were largely underrepresented at the outset of the work, as we dug in and started to understand the situation, we made adjustments that ultimately helped us stabilize and deliver the program.

Things that mattered in the delivery:

  1. Our challenges in testing aside, we had the benefit of a strong client partnership, particularly in program management and coordination, which helped given the high level of volatility we had in replanning as we progressed through the effort
  2. Given we were in a discovery process for the first half of the project, our tracking and reporting methods helped manage expectations and enable coordination as we continued to revise the approach and plan. One specific method we used was showing the fix rate in relation to the level of undiscovered defects and then mapping that additional effort directly to the adjusted plan (a simplified illustration follows this list).  When we visibly accounted for it in the schedule, it helped build confidence that we actually were “on plan” where we had good data and making consistent progress where we had taken on additional, unforeseen scope as well.  Those items were reasonably outside our control as a partner, so the transparency helped us given the visibility and pressure surrounding the work were very high
  3. Finally, we mitigated risk in various situations by making the decision to rewrite versus fix what was handed over at the outset of the project. The code quality being as poor as it was and requirements not being met, we had to evaluate whether it was easier to start over and work with a clean slate versus trying to reverse engineer something we knew didn’t work.  These decisions helped us reduce effort and risk, and ultimately deliver the program
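
As a simplified illustration of the kind of projection involved (the actual reporting was richer and the numbers below are invented), the sketch estimates how long it would take to clear a defect backlog when new defects are still being discovered while others are being fixed.

```python
def projected_weeks_to_clear(open_defects: int, fix_rate_per_week: float,
                             discovery_rate_per_week: float) -> float:
    """Rough projection of weeks to clear a defect backlog.

    Assumes roughly constant fix and discovery rates; in practice both would be
    re-baselined as real data comes in.
    """
    net_burn_per_week = fix_rate_per_week - discovery_rate_per_week
    if net_burn_per_week <= 0:
        raise ValueError("Backlog is not shrinking; the plan, not the math, needs to change")
    return open_defects / net_burn_per_week


if __name__ == "__main__":
    # e.g., 180 open severe defects, fixing 30 per week, still discovering 10 per week
    print(projected_weeks_to_clear(180, 30, 10))  # 9.0 weeks
```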

What I Would Change

Things I would have wanted to do differently in retrospect:

  1. As is probably obvious, the largest learning was that we didn’t make conservative enough assumptions in what we were inheriting with the project, the accuracy of testing information provided, or the code samples being “representative” of the entire codebase. In practice, though, had we estimated the work properly and attached the actual cost for doing the project, we might not have “sold” the proposal either…
  2. We didn’t factor changing requirements into our original estimates properly, partially because we were told the project was mid-testing, and largely built prior to our involvement. This added volatility into the project as we already needed to stabilize the application without realizing the requirements weren’t frozen.  In retrospect, we should have done a better job probing on this during the bidding process itself
  3. Finally, we had challenges maintaining momentum where a dedicated client testing team would have made the iteration process more efficient. It may have been necessary to lean on augmentation or a partner to help balance ongoing business and the project, but the cost of extending the effort was substantial enough that it likely was worth investigating

 

Wrapping Up

As I said at the outset, having had the benefit of delivering a number of “impossible” projects over the course of my career, I’ve learned a lot about how to address the mess that software development can be in practice, even with disciplined leadership.  That being said, the great thing about having success is that it also tends to make you a lot more fearless the next time a challenge comes up, because you have an idea what it takes to succeed under adverse conditions.

I hope the stories were worth sharing.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 02/07/2023