The Seeds of Transformation

Introduction

“I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the Earth.” – John F. Kennedy, May 25, 1961

When JFK made his famous pronouncement in 1961, the United States was losing the space race.  The Soviet Union was visibly ahead, to the point that the government shuffled the deck, bringing together various agencies to form NASA, and set a target far beyond where anyone was focused at the time: landing on the Moon.  The context is important: the U.S. was not operating from a position of strength, and JFK didn’t shoot for parity or settle into a defensive posture. Instead, he leaned in and set an audacious goal that redefined the playing field entirely.

I spoke at a town hall fairly recently about “The Saturn V Story”, a documentary that covers the space race and journey leading to the Apollo 11 moon landing on July 20, 1969.  The scale and complexity of what was accomplished in a relatively short timeframe was truly incredible and feels like a good way to introduce a Transformation discussion.  The Apollo program engaged 375,000 people at its peak, required extremely thoughtful planning and coordination (including the Mercury and Gemini programs that preceded it), and presented a significant number of engineering challenges that needed to be overcome to achieve its ultimate goal.  It’s an inspiring story, as any successful transformation effort should be.

The challenge is that true transformation is exceptionally difficult and many of these efforts fail or fall short of their stated objectives.  The remainder of this article will highlight some key dimensions that I believe are critical in increasing the probability of success.

Transformation is a requirement of remaining competitive in a global digital economy.  The disruptions (e.g., cloud computing, robotics, orchestration, artificial intelligence, cyber security exposure, quantum computing) have occurred and will continue to occur, and success will be measured, in part, by an organization’s ability to continuously transform, leveraging advanced capabilities to its maximum strategic benefit.

Successful Transformation

Culture versus Outcome

Before diving into the dimensions themselves, I want to emphasize the difference I see between changing culture and the kind of transformation I’m referencing in this article.  Culture is an important aspect of effecting change, as I will discuss in the context of the dimensions themselves, but a change in culture that doesn’t lead to a corresponding change in results is relatively meaningless.

To that end, I would argue that it is important to think about “change management” as a way to transition between the current and desired ways of working in a future state environment, but with specific, defined outcomes attached to the goal.

It is insufficient, as an example, to express “we want to establish a more highly collaborative workplace that fosters innovation” without also being able to answer the questions: “To what end?” or “In the interest of accomplishing what?”  Arguably, it is the desired outcome that sets the stage for the nature of the culture that will be required, both to get to the stated goal as well as to operate effectively once those goals are achieved.  In my experience, this balance isn’t given enough thought when change efforts are initiated, and it’s important to make sure culture and desired outcomes are both clear and aligned with each other.

For more on the fundamental aspects of a healthy environment, please see my article on The Criticality of Culture.

What it Takes

Successful transformation efforts require focus on many levels and in various dimensions to manage what ultimately translates to risk.

The set that comes to mind as most critical is having:

  • An audacious goal
    • Transformation is, in itself, a fundamental (not incremental) change in what an organization is able to accomplish
    • To the extent that substantial change is difficult, the value associated with the goal needs to outweigh the difficulties (and costs) that will be required to transition from where you are to where you need to be
    • If the goal isn’t compelling enough, there likely won’t be the requisite level of individual and collective investment required to overcome the adversity that is typically part of these efforts. This is not just about having a business case.  It’s a reason for people to care… and that level of investment matters where transformation is the goal
  • Courageous, committed leadership
    • Change is, by its nature, difficult and disruptive. There will be friction and resistance that comes from altering the status quo
    • The requirements of leadership in these efforts tend to be very high, because of the adversity and risk that can be involved, and a degree of fearlessness and willingness to ride through the difficulties is important
    • Where this level of leadership isn’t present, it becomes easy to focus on obstacles versus solutions and to avoid taking risks, which leads to suboptimized results or overall failure of the effort. If it were easy to transform, everyone would be doing it all the time
    • It is worth noting that, in the case of the Apollo missions, JFK wasn’t there to see the program through, yet it survived both his passing and significant events like the Apollo fire without compromising the goal itself
    • A question to consider in this regard: Is the goal so compelling that, if the vision holder / sponsor were to leave, the effort would still move forward? There are many large-scale efforts I’ve seen over the years where a change in leadership affects the commitment to a strategy.  There may be valid reasons for this to be the case, but arguably both a worthy goal and strong leadership are necessary components in transformation overall
  • An aligned and supportive culture
    • There is a significant aspect of accomplishing a transformational agenda that places a burden on culture
    • On this point, the going-in position matters in the interest of mapping out the execution approach, because anything about the environment that isn’t conducive to facilitating and enabling collaboration and change will ultimately create friction that needs to be addressed and (hopefully) overcome
    • To the extent that the organization works in silos or that there is significant and potentially unhealthy internal competition within and across leaders, the implications of those conflicts need to be understood and mitigated early on (to the degree possible) so as to avoid what could lead to adverse impacts on the effort overall
    • As a leader said to me very early in my career, “There is room enough in success for everybody.” Defining success at an individual and collective level may be a worthwhile activity to consider depending on the nature of where an organization is when starting to pursue change
    • On this final point, I have been in the situation more than once professionally where a team worked to actively undermine transformation objectives because those efforts had an adverse impact on their broader role in an organization. This speaks, in part, to the importance of engaged, courageous leadership to bring teams into alignment, but where that leadership isn’t present, it definitely makes things more difficult.  Said differently, the more established the status quo is, the harder it may resist change
  • A thoughtful approach
    • “Rome was not built in a day” is probably the best way to summarize this point
    • The greater the level of complexity and degree of change involved, the more thought and attention needs to be paid to planning out the approach itself
    • The Apollo program is a great example of this, because there were countless interim stages: the development of the Saturn V rocket, creating a safe environment for manned space flight, procedures for rendezvous and docking of the spacecraft, etc.
    • In a technology delivery environment, these can be program increments in a scaled Agile environment, selective “pilots” or “proof-of-concept” efforts, or interim deliveries in a more component-based (and service-driven) architecture. The overall point being that it’s important to map out the evolution of current to future state, allowing for testing and staging of interim goals that help reduce risk on the ultimate objectives
    • In a different example, when establishing an architecture capability in a large, complex organization, we established an operating model to define roles and responsibilities, but then operationalized the model in layers to help facilitate change with defined outcomes spread across multiple years. This was done purposefully and deliberately in the interest of making the changes sustainable and to gradually shift delivery culture to be more strategically-aligned, disciplined, and less siloed in the process
  • Agility and adaptiveness
    • The more advanced and innovative the transformation effort is, the more likely it will be that there is a higher degree of unknown (and knowledge risk) associated with the effort
    • To that end, it is highly probable that the approach to execution will evolve over time as knowledge gaps are uncovered and limitations and constraints need to be addressed and overcome
    • There are countless examples of this in the Apollo program, one of the early ones being the abandonment of the “Nova” rocket design, a massive single vehicle that was ultimately set aside in favor of the multi-stage rocket and lunar lander / command module approach. In this case, the means for arriving at and landing on the Moon was completely different than it was at the program’s inception, but the outcome was ultimately the same
    • I spend some time discussing these “points of inflection” in my article On Project Health and Transparency, but the important concept is not to be too prescriptive when planning a transformation effort, because execution will definitely evolve
  • Patience and discipline
    • My underlying assumption is that the level of change involved in transformation is significant and, as such, it will take time to accomplish
    • The balance to be struck is ultimately in managing interim deliveries in relation to the overall goals of the effort. This is where patience and discipline matter, because it is always tempting to take shortcuts in the interest of “speed to market” while compromising fundamental design elements that are important to overall quality and program-level objectives (something I address in Fast and Cheap Isn’t Good)
    • This isn’t to say that tradeoffs can’t or shouldn’t be made, because they often are, but rather that these be conscious choices, done through a governance process, and with a full understanding of the implications of the decisions on the ultimate transformation objectives
  • A relentless focus on delivery
    • The final dimension is somewhat obvious, but is important to mention, because I’ve encountered transformative efforts in the past that spent so much energy on either structural or theoretical aspects of their “program design” that they actually failed to deliver anything
    • In the case of the Apollo program, part of what makes the story so compelling is the number of times the team needed to innovate to overcome issues that arose, particularly in response to various design and engineering challenges
    • Again, this is why courageous, committed leadership is so important to transformation. The work is difficult and messy and it’s not for the faint of heart.  Resilience and persistence are required to accomplish great things.

Wrapping Up

Hopefully this article has provided some areas to consider in either mapping out or evaluating the health of a transformational effort.  As I covered in my article On Delivering at Speed, there are always opportunities to improve, even when you deliver a complex or high-risk effort.  The point is to be disciplined and thoughtful in how you approach these efforts, so the bumps that inevitably occur are more manageable and the impact they have is minimized overall.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 12/29/2024

Bringing AI to the End User

Overview

In my recent article on Exploring Artificial Intelligence, I covered several dimensions of how I think about the direction of AI, including how models will evolve from general-purpose and broad-based to be more value-focused and end consumer-specific as the above diagram is intended to illustrate.

The purpose of this article is to dive a little deeper into a mental model for how I believe the technology could become more relevant and valuable in an end-user (or end consumer) specific context.

Before that, a few assertions related to the technology and end user application of the technology:

  • The more we can passively collect data in the interest of simplifying end-user tasks and informing models, the better. People can be both inconsistent and unreliable in how they capture data.  In reality, our cell phones are collecting massive amounts of data on an ongoing basis that is used to drive targeted advertising and other capabilities to us without our involvement.  In a business context, however, the concept of doing so can be met with significant privacy and other concerns and it’s a shame because, while there is data being collected on our devices regardless, we aren’t able to benefit from it in the context of doing our work
  • Moving from a broad- or persona-based means of delivering technology capabilities to a consumer-specific approach is a potentially significant advancement in enabling productivity and effectiveness. This would be difficult or impossible to achieve without leveraging an adaptive approach that synthesizes various technologies (personalization, customization, dynamic code generation, role-based access control, AI/ML models, LLMs, content management, and so forth) to create a more cohesive and personalized user experience
  • While I am largely focusing on end-user application of the technology, I would argue that the same concepts and approach could be leveraged for the next generation of intelligent devices and digital equipment, such as robotics in factory automation scenarios
  • To make the technology both performant and relevant, part of the design challenge is to continually reduce and refine the level of “model” information that is needed at the next layer of processing so as not to overload the end computing device (presumably a cell phone or tablet) with a volume of data that isn’t required to enable effective action on behalf of the data consumer.

The rest of this article will focus on providing a mental model for how to think about the relationship across the various kinds of models that may make up the future state of AI.

Starting with a “Real World” example

Having spent a good portion of my time off traveling across the U.S., I was reminded more than once of the trust I place in Google Maps (even with a printed road atlas in my car), particularly when driving an “open range” gravel road with cattle roaming about in northwest Nebraska on my way to South Dakota.  In many ways, navigation software represents a good starting point for where I believe intelligent applications will eventually go in the business environment.

Maps is useful as a tool because it synthesizes the data it has on roads and navigation options with specific information like my chosen destination, location, speed traps, delays, and accident information that is specific to my potential routes, allowing for a level of customization if I prefer to take routes that avoid tolls and so on.  From an end-user perspective, it provides a next recommended action, remaining contextually relevant to where I am and what I need to do, along with how long it will be until that action needs to be taken, the distance remaining, and the time I should arrive at my final destination.

In a connected setting, navigation software pulls pieces of its overall model and applies data on where I am and where I’m going to (ideally) help me get there as efficiently as possible.  The application is useful because it is specific to me, to my destination, and to my preferred route, and is different from what would be delivered to a car immediately behind me, despite leveraging the same application and infrastructure.  This is the direction I believe we need to go with intelligent applications, to drive individual productivity and effectiveness.

Introducing the “Tree of Knowledge” concept

The Overall Model

The visual above is meant to represent the relationship of general-purpose and foundational models to what is ultimately delivered to an end user (or piece of digital equipment) in a distributed fashion.

Conceptually, I think of the relationship across data sets as if it were a tree. 

  • The general-purpose model (e.g., LLM) provides the trunk that establishes a foundation for downstream analytics
  • Domain-specific models (e.g., RAG) act as the branches that rely on the base model (i.e., the trunk) to provide process- or function-specific capabilities that can span a number of end-user applications, but have specific, targeted outcomes in mind
  • A “micro”-model is created when specific branches of the tree are deployed to an end-user based on their profile. This represents the subset that is relevant to that data consumer given their role, permissions, experience level, etc.
  • The data available at the end point (e.g., mobile device) then provides the leaves that populate the branches of the “micro”-models that have been deployed to create an adaptive model used to inform the end user and drive meaningful and productive action.

The adaptive model should also take into account user preferences (via customization options) and personalization to tune their experience as closely as possible to what they need and how they work.

In this way, the progression of models moves from general to very specific, end-user focused solutions that are contextualized with real-time data, much as in the navigation example above.

It is also worth noting that, in addition to delivering these capabilities, the mobile device (or endpoint) may collect and send data back to further inform and train the knowledge models by domain (e.g., process performance data) and potentially develop additional branches based on gaps that may surface in execution.
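
To make the layering a bit more concrete, below is a minimal sketch of how the trunk / branch / leaf relationship might be expressed in code.  It is purely illustrative: the names (DomainBranch, MicroModel, build_micro_model, and so on) are hypothetical, no particular AI framework or product is assumed, and it represents one way to express the concept rather than a reference implementation.

    # Illustrative sketch only: hypothetical names, no real AI framework or product assumed.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class DomainBranch:
        """A domain-specific model (e.g., a RAG pipeline) layered on the general-purpose trunk."""
        name: str
        required_roles: List[str]                 # which roles/permissions this capability applies to

    @dataclass
    class GeneralModel:
        """The 'trunk': a general-purpose foundation model with domain branches built on top."""
        name: str
        branches: List[DomainBranch] = field(default_factory=list)

    @dataclass
    class UserProfile:
        user_id: str
        roles: List[str]
        preferences: Dict[str, str] = field(default_factory=dict)   # customization / personalization

    @dataclass
    class MicroModel:
        """The subset of branches deployed to a single end user's device."""
        user: UserProfile
        branches: List[DomainBranch]
        leaves: Dict[str, str] = field(default_factory=dict)        # real-time data from the endpoint

    def build_micro_model(trunk: GeneralModel, user: UserProfile) -> MicroModel:
        """Prune the tree to the branches relevant to this user's role, permissions, etc."""
        relevant = [b for b in trunk.branches
                    if any(role in b.required_roles for role in user.roles)]
        return MicroModel(user=user, branches=relevant)

    def contextualize(micro: MicroModel, endpoint_data: Dict[str, str]) -> MicroModel:
        """Attach the 'leaves' (data available at the endpoint) so recommendations stay current."""
        micro.leaves.update(endpoint_data)
        return micro

    # Example: a technician receives only the branches their role requires,
    # contextualized with data from the equipment in front of them.
    trunk = GeneralModel("foundation-llm", branches=[
        DomainBranch("work-order-assistant", required_roles=["technician"]),
        DomainBranch("financial-forecasting", required_roles=["analyst"]),
    ])
    tech = UserProfile("u123", roles=["technician"], preferences={"units": "metric"})
    micro = contextualize(build_micro_model(trunk, tech), {"equipment_id": "pump-07", "location": "line-3"})
    print([b.name for b in micro.branches])      # ['work-order-assistant']

The pruning step is also where the earlier point about reducing the level of “model” information at each layer would come into play, so the endpoint only carries what it actually needs.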

Applying the Model

Having set context on the overall approach, there are some notable ways these capabilities could create a different experience and level of productivity than what we have today, namely:

  • Rather than delivering content and transactional capabilities based only on an end-user’s role and persona(s), those capabilities would be deployed to a user’s device (the branches of the “micro”-model), but synthesized with other information (the “leaves”) like the user’s experience level, preferences, location, training needs, and equipment information (in a manufacturing-type context) to generate an interface specific to them that continually evolves to optimize their individual productivity
  • As new capabilities (i.e., “branches”) are developed centrally, they could be deployed to targeted users, and their individual experiences would adapt to incorporate them in ways that work best for them and their given configuration, without having to relearn the underlying application(s)

Going Back to Navigation

On the last point above, a parallel example would be the introduction of weather information into navigation. 

At least in Google Maps, while there are real-time elements like speed traps, traffic delays, and accidents factored into the application, there is currently no mechanism to recognize or warn end users about significant weather events that may also surface along the route.  In practice, where severe weather is involved, this could represent a safety risk to the traveler and, in the event the model were adapted to include a “branch” for this kind of data, one would hope that the application would behave the same from an end-user standpoint, but with the additional capability integrated into the application.

Wrapping Up

Understanding that we’re still early in the exploration of how AI will change the way we work, I believe that defining a framework for how various types of models can integrate and work together across purposes would enable significant value and productivity if designed effectively.

I hope the ideas were worth considering.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 09/14/2024

On “Delivering at Speed”

Context

In my article on Excellence by Design, the fifth dimension I reference is “Delivering at Speed”, which is essentially the delicate balance to be struck in technology between creating value on a predictable, regular basis and still ensuring quality.

One thing that is true is that software development is messy and not for the faint of heart or risk averse.  The dynamics of a project, especially if you are doing something innovative, tend to be in flux at a level that requires you to adapt on the fly, make difficult decisions, accept tradeoffs, and abandon the idea that “perfection” even exists.  You can certainly try to design and build out an ivory tower concept, but the probability is that you’ll never deliver it or, in the event you do, that it will take you so long to make it happen that your solution will be obsolete by the time you finally go live.

To that end, this article is meant to share a set of delivery stories from my past.  In three cases, I was told the projects were “impossible” or couldn’t be delivered at the outset.  In the other two, the level of complexity or size of the challenge was relatively similar, though they weren’t necessarily labeled “impossible” at any point.  In all cases, we delivered the work.  What will follow, in each case, are the challenges we faced, what we did to address them, and what I would do differently if the situation happened again.  Even in success, there is ample opportunity to learn and improve.

Looking across these experiences, there is a set of things I would say apply in nearly all cases:

  1. Commitment to Success
    • It has to start here, especially with high velocity or complex projects. When you decide on day one that you’re going to deliver, every obstacle is a problem you solve, not a reason to quit.  Said differently, if you are continually looking for reasons to fail, you will.
  2. Adaptive Leadership
    • Change is part of delivering technology solutions. Courageous leadership that embraces humility, accepts adversity, and adapts to changing conditions will be successful far more than situations where you hold onto original assumptions beyond their usefulness
  3. Business/Technology Collaboration
    • Effective communication, a joint investment in success, and partnership make a significant difference in software delivery. The relationships and trust it takes can be achievements in themselves, but the quality of the solution and ability to deliver is definitely stronger where this is in place
  4. Timely Decision Making
    • As I discuss in my article On Project Health and Transparency, there are “points of inflection” that occur on projects of any scale. Your ability to respond, pivot, and execute in a new direction can be a critical determinant to delivering
  5. Allowing for the Unknown
    • With any project of scale or reasonable complexity, there will be pure “unknown” (by comparison with “known unknown”) that is part of the product scope, the project scope, or both. While there is always a desire to deliver solutions as quickly as possible with the lowest level of effort (discussed in my article Fast and Cheap Isn’t Good), including some effort or schedule time proactively as contingency for the unknown is always a good idea
  6. Excessive Complexity
    • One thing that is common across most of these situations is that the approach to the solution was building a level of flexibility or capability that probably was beyond what was needed in practice. This is not unusual on new development, especially where significant funding is involved, because those bells and whistles create part of the appeal that justifies the investment to begin with.  That being said, if a crawl-walk-run type approach that evolves to a more feature-rich solution is possible, the risk profile for the initial efforts (and associated cost) will likely be reduced substantially.  Said differently, you can’t generate return on investment for a solution you never deliver

The remainder of this article is focused on sharing a set of these delivery stories.  I’ve purposefully reordered and made them a bit abstract in the interest of maintaining a level of confidentiality on the original efforts involved.  In practice, these kinds of things happen on projects all the time, the techniques referenced are applicable in many situations, and the specifics aren’t as important.

 

Delivering the “Impossible”

Setting the Stage

I’ll always remember how this project started. I had finished up my previous assignment and was wondering what was next.  My manager called me in and told me about a delivery project I was going to lead, that it was for a global “shrink wrap” type solution, that a prototype had been developed, that I needed to design and build the solution… but not to be too concerned because the timeframe was too short, the customer was historically very difficult to work with, and there was “no way to deliver the project on time”.  Definitely not a great moment for motivating and inspiring an employee, but my manager was probably trying to manage my expectations given the size of the challenge ahead.

Some challenges associated with this effort:

  1. The nature of what was being done, automating an entirely manual process, was something that had never been done before. As such, the requirements didn’t exist at the outset, beyond a rudimentary conceptual prototype that demonstrated the desired user interface behavior
  2. I was completely unfamiliar with the technology used for the prototype and needed to immediately assess whether to continue forward with it or migrate into a technology I knew
  3. The timeframe was very aggressive and the customer was notorious for not delivering on time
  4. We needed everything we developed to fit on a single 3.5-inch diskette for distribution, and it was not a small application to develop

What Worked

Regardless of any of the mechanics, having been through a failed delivery early in my career, my immediate reaction to hearing the project was doomed from the outset was that there was no way we were going to allow that to happen.

Things that mattered in the delivery:

  1. Within the first two weeks, I both learned the technology that was used for the prototype and was able to rewrite it (it was a “smoke and mirrors” prototype) into a working, functional application. Knowing that the underlying technology could do what was needed in terms of the end user experience, we took the learning curve impact in the interest of reducing the development effort that would have been required to try and create a similar experience using other technologies we were using at the time
  2. Though we struggled initially, eventually we brought a leader from the customer team to our office to work alongside us (think Agile before Agile) so we could align requirements with our delivery iterations and produce application sections for testing in relatively complete form
  3. We had to develop everything as compactly as possible given the single disk requirement so the distribution costs wouldn’t escalate (tens of thousands of disks were being shipped globally)

All of the above having helped, the thing that made the largest difference was the commitment of the team (ultimately me and three others) to do whatever was required to deliver.  The brute force involved was substantial, and we worked increasing hours, week after week, until we pulled seven all-night sessions in the last ten days leading up to shipping the software to the production company.  It was an exceptionally difficult pace to sustain, but we hit the date, and the “impossible” was made possible.

What I Would Change

While there is a great deal of satisfaction that comes from meeting a delivery objective, especially an aggressive one, there are a number of things I would have wanted to do differently in retrospect:

  1. We grew the team over time in a way that created additional pressure in the latter half of the project. Given we started with no requirements and were doing something that had never been done, I’m not sure how we could have estimated the delivery to know we needed more help sooner, but minimally, from a risk standpoint, there was too much work spread too thinly for too long, and it made things very challenging later on to catch up
  2. As I mentioned above, we eventually transitioned to have an integrated business/technology team that delivered the application with tight collaboration. This should have happened sooner, but we collectively waited until it became a critical issue and escalated to a level where someone really addressed it.  That came when we actually ran out of requirements late one night (it was around 2am) to the point that we needed to stop development altogether.  The friction this created between the customer and development team was difficult to work through and is something the change in approach made much better, just too late in the project
  3. From a software standpoint, given it was everyone on the team’s first foray into new technology (starting with me), there was a lot we could have done to design the solution better, but we were unfortunately learning on the job. This is another one that I don’t know how we could have offset, beyond bringing in an expert developer to help us work through the design and sanity check our work, but it was such new technology at the time that I don’t know whether that was really a viable option or that such expertise was available
  4. This was also my first global software solution and I didn’t appreciate the complexities of localization enough to avoid making some very basic mistakes that showed up late in the delivery process.

 

Working Outside the Service Lines

Setting the Stage

This project honestly was sort of an “accidental delivery”, in that there was no intention from a services standpoint to take on the work to begin with.  Similar to my product development experience, there was a customer need to both standardize and automate what was an entirely manual process.  Our role, and mine in particular, was to work with the customer, understand the current process across the different people performing it, then look for a way to standardize the workflow itself (in a flexible enough way that everyone could be trained and follow that new process), and finally to define the opportunities to automate it such that a lot of the effort done in spreadsheets (prone to various errors and risks) could be built into an application that would make it much easier to perform the work.

The point of inflection came when we completed the process redesign and, with no implementation partner in place (and no familiarity with the target technologies in the customer team), the question became “who is going to design and build this solution?”  Having a limited window of time, a significant amount of seasonal business that needed to be processed, and the right level of delivery experience, I offered to shift from a business analyst to the technology lead on the project.  With a substantial challenge ahead, I was again told what we were trying to do could never be done in the time we had.  Having had previous success in that situation, I took it as a challenge to figure out what we needed to do to deliver.

Some challenges associated with this effort:

  1. The timeframe was the biggest challenge, given we had to design and develop the entire application from scratch. The business process was defined, but there was no user interface design, it was built using relatively new technology, and we needed to provide the flexibility users were used to having in Excel while still enforcing a new process
  2. Given the risk profile, the customer IT manager assumed the effort would fail and consequently provided only a limited amount of support and guidance until the very end of the project, which created some integration challenges with the existing IT infrastructure
  3. Finally, given that there were technology changes occurring in the market as a whole, we encountered a limitation in the tools (given the volume of data we were processing) that nearly caused us to hit a full stop mid-development

What Worked

Certainly, an advantage I had coming into the design and delivery effort was that I helped develop the new process and was familiar with all the assumptions we made during that phase of the work.  In that respect, the traditional disconnect between “requirements” and “solution” was fairly well mitigated and we could focus on how to design the interface, not the workflow or data required across the process.

Things that mattered in the delivery:

  1. One major thing that we did well from the outset was work in a prototype-driven approach, engaging with the end customer, sketching out pieces of the process, mocking them up, confirming the behavior, then moving onto the next set of steps while building the back end of the application offline. Given we only had a matter of months, the partnership with the key business customer and their investment in success made a significant difference in the efficiency of our delivery process (again, very Agile before Agile)
  2. Despite the lack of support from customer IT leadership standpoint, a key member of their team invested in the work, put in a tremendous amount of effort, and helped keep morale positive despite the extreme hours we worked for essentially the entire duration of the project
  3. While not as pleasant, another thing that contributed to our success was managing performance actively. Wanting external expertise (and needing the delivery capacity), we pulled in additional contracting help, but had inconsistent experience with the commitment level of the people we brought in.  Simply said: you can’t deliver a high-velocity project with a half-hearted commitment.  It doesn’t work.  The good news is that we didn’t delay decisions to pull people where the contributions weren’t where they needed to be
  4. On the technology challenges, when serious issues arose with our chosen platform, I took a fairly methodical approach to isolating and resolving the infrastructure issues we had. The result was a very surgical and tactical change to how we deployed the application without needing to do a more complex (and costly) end user upgrade that initially appeared to be our only option

What I Would Change

While the long hours and months without a day off ultimately enabled us to deliver the project, there were certainly learnings from this effort that I took away despite our overall success.

Things I would have wanted to do differently in retrospect:

  1. While the customer partnership was very effective overall, one area where we didn’t engage early enough was with the customer analytics organization. Given the large volume of data, heavy reliance on computational models, and the capability for users to select data sets to include in the calculations being performed, we needed more support than expected to verify our forecasting capabilities were working as expected.  This was actually a gap in the upstream process design work itself, as we identified the desired capability (the “feature”) and where it would occur within the workflow, but didn’t flesh out the specific calculations (the “functionality”) that needed to be built to support it.  As a result, we had to work through those requirements during the development process itself, which was very challenging
  2. From a technology standpoint, we assumed a distributed approach for managing data associated with the application. While this reduced the data footprint for individual end users and simplified some of the development effort, it actually made the maintenance and overall analytics associated with the platform more complex.  Ultimately, we should have centralized the back end of the application.  This is something that was done subsequent to the initial deployment, though I’m not certain if we would have been able to take that approach with the initial release and still made the delivery date
  3. From a services standpoint, while I had the capability to lead the design and delivery of the application, the work itself was outside the core service offerings of our firm. Consequently, while we delivered for the customer, there wasn’t an ability to leverage the outcome for future work, which is important in consulting for building your business.  In retrospect, while I wouldn’t have learned and gotten the experience, we should have engaged a partner in the delivery and played a different role in implementation

 

Project Extension

Setting the Stage

Early in my experience of managing projects, I had the opportunity to take on an effort where the entire delivery team was coming off a very difficult, long project.  I was motivated and wanted to deliver, while everyone else was pretty tired.  The guidance I received at the outset was not to expect very much, and that the road ahead was going to be very bumpy.

Some challenges associated with this effort:

  1. As I mentioned above, the largest challenge was a lack of motivation, something that had been a strength in other high-pressure deliveries I’d encountered before. I was unused to dealing with it from a leadership standpoint and didn’t address it as effectively as I should have
  2. From a delivery standpoint, the technical solution was fairly complex, which made the work and testing process challenging, especially in the timeframe we had for the effort
  3. At a practical level, the team was larger than I had previous experience leading. Leading other leaders wasn’t something I had done before, which led me to making all the normal mistakes that come with doing so for the first time, which didn’t help on either efficiency or sorely needed motivation

What Worked

While the project started with a team that was already burned out, the good news is that the base application was in place, the team understood the architecture, and the scope was to build out existing capabilities on top of a reasonably strong foundation.  There was a substantial amount of work to be performed in a relatively short timeframe, but we weren’t starting from scratch and there was recent evidence that the team could deliver.

Things that mattered in the delivery:

  1. The client partnership was strong, which helped both in addressing requirements gaps and, more importantly, in performing customer testing in both an efficient and effective manner given the accelerated timeframe
  2. At the outset of the effort, we revisited the detailed estimates and realigned the delivery team to balance the work more effectively across sub-teams. While this required some cross-training, we reduced overall risk in the process
  3. From a planning standpoint, we enlisted the team to try out an aggressive approach where we set all the milestones slightly ahead of their expected delivery date. Our assumption was that, by trying to beat our targets, we could create some forward momentum that would create “effort reserve” to use for unexpected issues and defects later in the project
  4. Given the pace of the work and size of the delivery team, we had the benefit of strong technical leads who helped keep the team focused and troubleshoot issues as and when we encountered them

What I Would Change

Like other projects I’m covering in this article, the team put in the effort to deliver on our commitments, but there were definitely learnings that came through the process.

Things I would have wanted to do differently in retrospect:

  1. Given it was my first time leading a larger team under a tight timeline, I pushed where I should have inspired. It was a learning experience that I’ve used for the benefit of others many times since.  While I don’t know what impact it might have had on the delivery itself, it might have made the experience of the journey better overall
  2. From a staffing standpoint, we consciously stuck to the team that helped deliver the initial project. Given the burnout was substantial and we needed to do a level of cross-training anyway, it might have been a good idea for us to infuse some outside talent to provide fresh perspective and much needed energy from a development standpoint
  3. Finally, while it was outside the scope of work itself, this project was an example of a situation I’ve encountered a few times over the years where the requirements of the solution and its desired capabilities were overstated and translated into a lot of complexity in architecture and design. My guess is that we built a lot of flexibility that wasn’t required in practice

 

Modernization Program

Setting the Stage

What I think of as another “impossible” delivery came with a large-scale program that started off with everyone but the sponsors assuming it would fail.

Some challenges associated with this effort:

  1. The largest thing stacked against us was two failed attempts to deliver the project in the past, with substantial costs associated with each. Our business partners were well aware of those failures, some having participated in them, and the engagement was tentative at best when we started to move into execution
  2. We also had significant delivery issues with our primary technology partner that resulted in them being transitioned out mid-implementation. Unfortunately, they didn’t handle the situation gracefully, escalated everywhere, told the CIO the project would never be successful, and the pressure on the team to hit the first release on schedule was increased by extension
  3. From an architecture standpoint, the decision was made to integrate new technology with existing legacy software wherever possible, which added substantial development complexity
  4. The scale and complexity for a custom development effort was very significant, replacing multiple systems with one new, integrated platform, and the resulting planning and coordination was challenging
  5. Given the solution replaced existing production systems, there was a major challenge in keeping capabilities in sync between the new application and ongoing enhancements being implemented in parallel by the legacy application delivery team

What Worked

Things that mattered in the delivery:

  1. As much as any specific decision or “change”, what contributed to the ultimate success of the program was our continuous evolution of the approach as we encountered challenges. With a program of the scale and complexity we were addressing, there was no reasonable way to mitigate the knowledge and requirements risks that existed at the outset.  What we did exceptionally well was to pivot and work through obstacles as they appeared… in architecture, requirements, configuration management, and other aspects of the work.  That adaptive leadership was critical in meeting our commitments and delivering the platform
  2. The decision to change delivery partners was a significant disruption mid-delivery that we managed with a weekly transition management process to surface and address risks and issues on an ongoing basis. The governance we applied was very tight across all the touchpoints into the program and it helped us ultimately onboard the new partner and reduce risk on the first delivery date, which we ultimately met
  3. To accelerate overall development across the program, we created both framework and common components teams, leveraging reuse to help reduce risk and effort required in each of the individual product teams. While there was some upfront coordination to decide how to arbitrate scope of work, we reduced the overall effort in the program substantially and could, in retrospect, have built even more “in common” than we did
  4. Finally, to keep the new development in sync with the current production solutions, we integrated the program with ongoing portfolio management processes from work-intake and estimation through delivery as if we were already in production. This helped us avoid rework that would have come if we had to retrofit those efforts post-development in the pre-production stage of the work

The net result of a lot of adjustments and a very strong, committed set of delivery teams was that we met our original committed launch date and moved into the broader deployment of the program.

What I Would Change

The learnings from a program of this scale could constitute an article all on their own, so I’ll focus on a subset that were substantial at an overall level.

Things I would have wanted to do differently in retrospect:

  1. As I mentioned in the point on common components above, the mix between platform and products wasn’t right. Our development leadership was drawn from the legacy systems, which helped, given they were familiar with the scope and requirements, but the downside was that the new platform ended up being siloed in a way that mimicked the legacy environment.  While we started to promote a culture of reuse, we could have done a lot more to reduce scope in the product solutions and leverage the underlying platform more
  2. Our product development approach should have been more framework-centric, being built towards broader requirements versus individual nuances and exceptions. There was a considerable amount of flexibility architected into the platform itself, but given the approach was focused on implementing every requirement as if it was an exception, the complexity and maintenance cost of the resulting platform was higher than it should have been
  3. From a transition standpoint, we should have replaced our initial provider earlier, but given the depth and nature of their relationships and a generally risk-averse mindset, we gave them a matter of months to fail, multiple times, before making the ultimate decision to change. Given there was a substantial difference in execution once we completed transition, we waited longer than we should have
  4. Given we were replacing multiple existing legacy solutions, there was a level of internal competition that was unhealthy and should have been managed more effectively from a leadership standpoint. The impact was that there were times the legacy teams were accelerating capabilities on systems we knew were going to be retired in what appeared to be an effort to undermine the new platform

 

Project Takeover

Setting the Stage

We had the opportunity in consulting to bid on a development project from our largest competitor that was stopped mid-implementation.  As part of the discovery process, we received sample code, testing status, and the defect log at the time the project was stopped.  We did our best to make conservative assumptions on what we were inheriting and the accuracy of what we received, understanding we were in a bidding situation and had to lean into discomfort and price the work accordingly.  In practice, the situation we took over was far worse than expected.

Some challenges associated with this effort:

  1. While the quality of solution was unknown at the outset, we were aware of a fairly high number of critical defects. Given the project didn’t complete testing and some defects were likely to be blocking the discovery of others, we decided to go with a conservative assumption that the resulting severe defect count could be 2x the set reported to us.  In practice the quality was far worse and there were 6x more critical defects than were reported to us at the bidding stage
  2. In concert with the previous point, while the testing results provided a mixed sense of progress, with some areas being in the “yellow” (suggesting a degree of stability) and others in the “red” (needing attention), in practice, the testing regimen itself was clearly not thorough and there wasn’t a single piece of the application that was better than a “red” status, most of them more accurately “purple” (restart from scratch), if such a condition even existed.
  3. Given the prior project was stopped, there was a very high level of visibility, and the expectation to pick up and use what was previously built was unrealistic to a large degree, given the quality of work was so poor
  4. Finally, there was considerable resource contention with the client testing team not being dedicated to the project and, consequently, it became very difficult to verify the solution as we progressed through stabilizing the application and completing development

What Worked

While the scale and difficulty of the effort were largely understated at the outset of the work, as we dug in and started to understand the situation, we made adjustments that ultimately helped us stabilize and deliver the program.

Things that mattered in the delivery:

  1. Our challenges in testing aside, we had the benefit of a strong client partnership, particularly in program management and coordination, which helped given the high level of volatility we had in replanning as we progressed through the effort
  2. Given we were in a discovery process for the first half of the project, our tracking and reporting methods helped manage expectations and enable coordination as we continued to revise the approach and plan. One specific method we used was showing the fix rate in relation to the level of undiscovered defects and then mapping that additional effort directly to the adjusted plan (a simple illustration of this kind of projection follows this list).  When we visibly accounted for it in the schedule, it helped build confidence that we actually were “on plan” where we had good data and making consistent progress where we had taken on additional, unforeseen scope as well.  Those items were reasonably outside our control as a partner, so the transparency helped us, given that the visibility and pressure surrounding the work were very high
  3. Finally, we mitigated risk in various situations by making the decision to rewrite versus fix what was handed over at the outset of the project. The code quality being as poor as it was and requirements not being met, we had to evaluate whether it was easier to start over and work with a clean slate versus trying to reverse engineer something we knew didn’t work.  These decisions helped us reduce effort and risk, and ultimately deliver the program
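
Purely as an illustration of that kind of projection (the numbers below are hypothetical, and the real tracking lived in our project reporting rather than in code), the math amounted to something like this:

    # Hypothetical numbers for illustration only.
    reported_critical = 200            # critical defects reported to us at the bidding stage
    discovery_multiplier = 6           # in practice, roughly 6x the reported count surfaced
    projected_total = reported_critical * discovery_multiplier

    fixed_to_date = 400                # defects resolved so far
    fix_rate_per_week = 60             # observed team fix throughput

    remaining = projected_total - fixed_to_date
    additional_weeks = remaining / fix_rate_per_week

    print(f"Projected total critical defects: {projected_total}")
    print(f"Remaining: {remaining} (~{additional_weeks:.1f} weeks at the current fix rate)")

Carrying that additional effort visibly in the adjusted schedule, rather than hiding it, is what kept the plan credible as the undiscovered scope surfaced.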

What I Would Change

Things I would have wanted to do differently in retrospect:

  1. As is probably obvious, the largest learning was that we didn’t make conservative enough assumptions in what we were inheriting with the project, the accuracy of testing information provided, or the code samples being “representative” of the entire codebase. In practice, though, had we estimated the work properly and attached the actual cost for doing the project, we might not have “sold” the proposal either…
  2. We didn’t factor changing requirements into our original estimates properly, partially because we were told the project was mid-testing and the application had largely been built prior to our involvement. This added volatility into the project, as we needed to stabilize the application without realizing the requirements weren’t frozen.  In retrospect, we should have done a better job probing on this during the bidding process itself
  3. Finally, we had challenges maintaining momentum where a dedicated client testing team would have made the iteration process more efficient. It may have been necessary to lean on augmentation or a partner to help balance ongoing business and the project, but the cost of extending the effort was substantial enough that it likely was worth investigating

 

Wrapping Up

As I said at the outset, having had the benefit of delivering a number of “impossible” projects over the course of my career, I’ve learned a lot about how to address the mess that software development can be in practice, even with disciplined leadership.  That being said, the great thing about having success is that it also tends to make you a lot more fearless the next time a challenge comes up, because you have an idea what it takes to succeed under adverse conditions.

I hope the stories were worth sharing.  Thanks for spending the time to read them.  Feedback is welcome as always.

-CJG 02/07/2023