Managing a technology footprint at any medium-to-large organization today is a significant challenge, for a multitude of reasons:
Proliferation of technologies and solutions that are disconnected or integrated in inconsistent ways, making simplification or modernization efforts difficult to deliver
Mergers and acquisitions that bring new systems into the landscape that aren’t rationalized with or migrated to existing systems, creating redundancy, duplication of capabilities, and cost
“Speed-to-market” initiatives involving unique solution approaches that increase complexity and cost of ownership
A blend of in-house and purchased software solutions, hosted across various platforms (including multi-cloud), increasing complexity and cost of security, integration, performance monitoring, and data movement
Technologies advancing at a rate, especially with the introduction of artificial intelligence (AI), that outpaces organizations’ ability to integrate them in a consistent manner
Decentralized or federated technology organizations that operate with relative autonomy, independent of standards, frameworks, or governance, which increases complexity and cost
Any of these factors can create enough cost and complexity that the focus within a technology organization shifts from innovation and value creation to keeping the lights on and maintaining a reliable and secure operating environment.
This article is the first in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”. The sooner an organization begins to operate from a unified view of how to align, integrate, and leverage these often disjointed capabilities, the faster it will leapfrog others in driving sustainable productivity, profitability, and competitive advantage.
Why It Matters
Before getting into the dimensions of the future state, I wanted to first clarify how these technology challenges manifest themselves in meaningful ways, because complexity isn’t just an IT problem; it’s a business issue, and partnership is important in making thoughtful choices about how we approach future solutions.
Lost Productivity
A leadership team at a manufacturing facility meets first thing in the morning, the first of several meetings they will hold throughout the day. They are setting priorities for the day collectively because the systems that support them (a combination of applications, analytics solutions, equipment diagnostics, and AI tools) each provide a different perspective on priorities and potential issues, in disconnected ways, and it falls to the leadership team to decide which should receive attention and priority in the interest of making their production targets for the day. Are they making the best choices in terms of promoting efficiency, quality, and safety? There’s no way to know.
Is this an unusual situation? Not at all. Today’s technology landscape is often a tapestry of applications with varied levels of integration and data sharing, data apps and dashboards meant to provide insights and suggestions, and now AI tools to “assist” or make certain activities more efficient for an end user.
The problem is what happens when all these pieces end up on someone’s desktop, browser, or mobile device. That person is left to copy data from one solution to another, arbitrate which of various alerts and notifications is most important, and identify dependencies to make sure they are taking the right actions in the right sequence (as in the case of directed work activity). That time is lost productivity in itself, regardless of which path they take, and the impact can be amplified further given that retention and high turnover are real issues in some jobs, reducing the experience available to navigate these challenges successfully.
Lower Profitability
The result of this lost productivity and ever-expanding technology footprint is both lost revenue (to the extent it hinders production or effective resource utilization) and higher operating cost, especially to the degree that organizations introduce the next new thing without retiring or replacing what was already in place, or integrating things effectively. Speed-to-market is a short-term concept that tends to cause longer-term cost of ownership issues (as I previously discussed in the article “Fast and Cheap Isn’t Good”), especially to the degree that there isn’t a larger blueprint in place to make sure such advancements are done in a thoughtful, deliberate manner.
To this end, how we do something can be as important as what we intend to do, and there is an argument for thinking through the operating implications of new technology efforts with a more holistic mindset than, in my experience, a single project tends to take.
Lost Competitive Advantage
Beyond the financial implications, all of the varied solutions, accumulated technologies and complexity, and custom or interim band-aids built to connect one solution to the next eventually catch up with you in the form of what one organization used to refer to as “waxy buildup”, which prevents you from moving quickly on anything. What seems on paper to be a simple addition or replacement becomes a lengthy process of analysis and design that is cumbersome and expensive, and the lost opportunity is speed-to-market in an increasingly competitive marketplace.
This is where new market entrants thrive and succeed, because they don’t carry the legacy debt and complexity of entrenched market players who are either too slow to respond or too resistant to change to truly transform at a level that allows them to sustain competitive advantage. Agility gives way to a “death by a thousand paper cuts” of tactical decisions that were appropriate and rational in the moment but created significant amounts of technical debt that inevitably must be paid.
A Vision for the Future
So where does this leave us? Pack up the tent and go home? Of course not.
We are at a significant inflection point with AI technology that affords us the opportunity to examine where we are and to start adjusting our course to a more thoughtful and integrated future state where AI, applications, and data and analytics solutions work in concert and harmony with each other versus in a disconnected reality of confusion.
It begins with the consumers of these capabilities, supported by connected ecosystems of intelligent applications, enabled by insights, agents, and experts, that infuse intelligence to make people productive, make businesses agile and competitive, and improve the value derived from technology investments at a level disproportionate to what we can achieve today.
The remaining articles in this series will focus on various dimensions of what the above conceptual model means, as a framework, in terms of AI, applications, and data, and then how we approach that transition and think about it from an IT organizational perspective.
Up Next: Establishing the Framework for the future…
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
Having had this blog for nearly four years, I took a look at the nature of the articles written to date, and the subjects included therein, wondering what patterns might emerge. I found the resulting chart (above) interesting as a reflection of the relative importance I associate with certain topics overall. To that end, I thought I’d provide some perspective on what’s been written to date before moving on to the next article, whatever that may be.
Leadership and Culture
The two largest focus areas were leadership and culture, which isn’t surprising given I’ve worked for many years across corporate and consulting environments and have seen the relative impact that both can have on organizational performance on the whole. Nearly two-thirds of my articles to date touch on leadership and one-half on culture, because they are fundamental to setting the stage for everything else you want to accomplish.
In the case of organizational excellence, courageous leadership has to be at the top of the list, given that difficult decisions and a level of fearlessness are required to achieve great things. By contrast, hesitancy and complacency will almost always lead to suboptimal results, because there will be apprehension about innovating, challenging the status quo, and effectively managing relationships where being a partner and advisor may require difficult conversations at times.
With leadership firmly rooted, it becomes possible to establish a culture that promotes integrity, respect, collaboration, innovation, productivity, and results. Where one or more of these dimensions is missing, it is nearly impossible to be effective without compromising performance somewhere. That isn’t to say that you can’t deliver in an unhealthy environment; you certainly can, and many organizations do. It is very likely, however, that those gains will be short-lived and difficult to repeat or sustain because of the consequential impact of those issues on the people working in those conditions over time. In this case, the metrics will likely tell the tale, between delivery performance, customer feedback, solution quality, and voluntary attrition (to name a few).
Delivery and Innovation
With the above foundation in place, the next two areas of focus were delivery and innovation, which is reassuring given that I believe strongly in the concept of actionable strategy versus one that is largely theoretical in nature. Having worked in environments that leaned heavily on innovation without enough substantive delivery, as well as ones that delivered consistently but didn’t innovate enough, the answer is to ensure both are occurring on a continual basis and are managed in a very deliberate way.
Said differently, if you innovate without delivering, you won’t create tangible business value. If you deliver without ever innovating, at some point, you will lose competitive advantage or risk obsolescence in some form or other.
The Role of Discipline
While not called out as a topic in itself, in most cases where I discuss delivery or IT operations, I mention discipline as well, because I believe it is a critical component of pursuing excellence in anything. The odd contradiction is the notion that having discipline somehow implies bureaucracy or moving slowly, when the reality is the exact opposite.
Without defined, measurable, and repeatable processes, it is nearly impossible to drive continuous improvement and establish a more predictable operating environment over time. From a delivery standpoint, having a methodology isn’t about being prescriptive to the point that you lose agility; it’s about having an understood approach against which you can estimate and plan effectively. It also defines rules of engagement within and across teams so that you can partner and execute efficiently in a repeatable fashion. Having consistent processes also allows for monitoring, governing, and improving the efficiency and efficacy of how things are done over time.
The same could be said for leveraging architectural frameworks, common services, and design patterns. There is a cost to establishing these things, but if you amortize these investments over time, they ultimately improve speed, reduce risk, improve quality, and thereby reduce the TCO and complexity of an environment once they are in place. This is because teams aren’t each inventing their own way of doing things and creating complexity that needs to be maintained and supported down the road. Said differently, it would be very difficult to have reliable estimation metrics if you never do something in a consistent way and analyze the variance.
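To make the amortization argument concrete, here is a minimal sketch; the hours are hypothetical and purely illustrative.

```python
# Illustrative sketch (hypothetical numbers): amortizing a shared-service
# investment across the projects that reuse it.

UPFRONT_COST = 400        # hours to build a common service or framework
SAVING_PER_PROJECT = 50   # hours each reusing project saves

def net_benefit(projects: int) -> int:
    """Cumulative hours saved; negative while the investment is being paid off."""
    return projects * SAVING_PER_PROJECT - UPFRONT_COST

for n in (2, 8, 16):
    print(f"{n:>2} projects: net {net_benefit(n):+} hours")
# Break-even at 8 projects; every project after that compounds the return.
```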
Mental Models and Visualization
The articles also reflect that I prefer having a logical construct and visualizations to organize, illustrate, analyze, and evaluate complex situations, such as AI and data strategy, workforce and sourcing strategy, and digital manufacturing facilities, among others. Each of these topics involves many dimensions and layers of associated complexity. Having a mental model, whether it is a functional decomposition, component model, or some other framework, is helpful both for identifying the dimensions of a problem and for surfacing dependencies and relationships in the interest of driving transformation.
Visualizations also can help facilitate alignment across broader groups of stakeholders where a level of parallel execution is required, making dependencies and relationships more evident and easier to coordinate.
Wrapping Up
Overall, the purpose of writing this article was simply to pause and reflect on what has become a fairly substantive body of work over the last several years, along with recognizing the themes that reoccur time and again because they matter when excellence is your goal. Achieving great things consistently is a byproduct of having vision, effective leadership, discipline, commitment, and a lot of tenacity.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
A new leader in an organization once asked to understand my role. My answer was very simple: “My role is to change mindsets.”
I’m fairly sure the expectation was something different: a laundry list of functional responsibilities, goals, in-flight activities or tasks that were top of mind, the makeup of my team, etc. All relevant aspects of a job, to be sure, but not my primary focus.
I explained that my goal was to help transform the organization, and if I couldn’t change people’s mindsets, everything else that needed to be done was going to be much more difficult. That’s how it is with change.
Complacency is the enemy. Excellence is a journey and you are never meant to reach the destination.
Having been part of and worked with organizations that enjoyed tremendous market share but then encountered adversity and lost their advantage, I’ve seen common characteristics, starting with basking in the glow of that success too long and losing the hunger and drive that made them successful in the first place.
The remainder of this article will explore the topic further in three dimensions: leadership, innovation, and transformation, in the interest of providing some perspective on what to look for when excellence is your goal.
Fall short of excellence, you can still be great. Try to be great and fail? You’re going to be average… and who wants to be part of something average? No one who wants to win.
Courageous Leadership
As with anything, excellence has to start with leadership. There is always resistance and friction associated with change. That’s healthy and good, because it surfaces questions and risks and, in a perfect world, the more points of view you can leverage in setting direction, the more likely you are to avoid blind spots or mistakes that stem simply from a lack of awareness or understanding of what you are doing.
There is a level of discipline needed to accomplish great things over time, and courage is a requirement, because there will inevitably be challenges, surprises, and setbacks. How leaders respond to that adversity, through their adaptability, tenacity, and resilience, will ultimately have a substantial influence on what is possible overall.
Some questions to consider:
Is there enough risk tolerance to create space to try new ideas, fail, learn, and try again?
Is there discipline in your approach so that business choices are thoughtful, reasoned, intentional, measured, and driven towards clear outcomes?
Is there a healthy level of humility to understand that, no matter how much success there is right now, without continuing to evolve, there will always be a threat of obsolescence?
Relentless Innovation
In my article on Excellence by Design, I was deliberate in choosing the word “relentless” in terms of innovation, because I’ve seen so many instances over time of the next silver bullet meant to be a “game changer”, “disruptor”, etc., only to see it overtaken by the next big thing a year or so later.
One of the best things about working in technology is that it constantly gives us opportunities to do new things: to be more productive and effective, produce better outcomes, create more customer value, and be more competitive.
Some people see that as a threat, because it requires a willingness to continue to evolve, adapt, and learn. You can’t place too much value on a deep understanding of X technology, because tomorrow Y may come along and make that knowledge fairly obsolete. While there is an aspect of that argument that is true at an implementation level, it gives too much importance to the tools and not enough to the problems we’re ultimately trying to solve, namely creating a better customer experience, delivering a better product or service, and so on.
We need to plan as if the most important thing right now won’t be the most important thing six months or even a year from now. Assume we will want to replace it, or integrate something new to work with it, improving our overall capability and creating even more value over time.
What does that do? In a disciplined environment, it should change our mindset about how we approach implementing new tools and technologies in the first place. It should also influence how much exposure we create in the dependencies we place upon those tools in the process of utilizing them.
To take what could be a fairly controversial example: I’ve written multiple articles on Artificial Intelligence (AI), how to approach it, and how I think about it in various dimensions, including where it is going. The hype surrounding these technologies is deservedly very high right now, there is a surge in investment, and a significant number of tools are and will be hitting the market. It’s also reasonable to assume a number of “agentic” solutions will pop up, meant to solve this problem and that… ok… now what happens then? Are things better, worse, or just different? What is the sum of an organization that is fully deployed with all of the latest tools? I don’t believe we have any idea and I also believe it will be terribly inefficient if we don’t ask this question right now.
As a comparison, what history has taught us is that there will be a user plugged into these future ecosystems somewhere, with some role and responsibilities, working in concert (and ideally in harmony) with all this automation (physical and virtual) that we’ve brought to bear on everyone’s behalf. How will they make sense of it all? If we drop in an agent for everything, is it any different than giving someone a bunch of new applications, all of which spit recommendations, notifications, and alerts at them, saying “this is what you need to do”, but leaving them to figure out which of those disconnected pieces of advice make the most sense and which should be the priority, while trying somehow not to be overwhelmed? Maybe not, because the future state might be a combination of intelligent applications (something I wrote about in The Intelligent Enterprise) and purpose-built agents that fill gaps those applications don’t cover.
Ok, so why does any of that matter? I’m not making an argument against experimenting with and leveraging AI. My point is that, every time there is a surge towards the next technology advancement, we seldom think about the reality that it will eventually evolve or be replaced by something else, and we should take that into consideration as we integrate those new technologies to begin with. The only constant is change, and that’s a good thing, but we also need to be disciplined in how we think about it on an ongoing basis.
Some questions to consider:
Is there a thoughtful and disciplined approach to innovation in place?
Is there a full, lifecycle-oriented view when introducing new technologies, considering both how to integrate them so they can later be replaced and how to retire existing, potentially redundant solutions once the new ones are in place?
Are the new technologies being vetted, reviewed, and integrated as part of a defined ecosystem with an eye towards managing technical debt over time?
Continual Transformation
In the spirit of fostering change, it is very common for a “strategy” conversation to be rooted in a vision. A vision sets the stage for what the future environment is meant to look like. It is ideally compelling enough to create a clear understanding of the desired outcome and to generate momentum in the pursuit of that goal (or set of goals)… and experience has taught me this is actually NOT the first or only important thing to consider in that first step.
Sustainable change isn’t just about having a vision, it is about having the right culture.
The process for strategy definition isn’t terribly complicated at an overall level: define a vision, understand the current state, identify the gaps, develop a roadmap to fill those gaps, execute, adapt, and govern until you’re done.
The problem is that large transformation efforts are extremely difficult to deliver. I don’t believe that difficulty is usually rooted in the lack of a clear vision, or that it is as simple as execution issues that ultimately undermine success. I believe successful transformation isn’t a destination to begin with. Transformation should be a continual journey towards excellence.
How that excellence manifests can be articulated through one or more “visions” that communicate concepts of the desired state, but that picture can and will evolve as new capabilities become available through automation, process, and organizational change. What’s most important is having the courageous leadership and innovation mindset mentioned above, but also a culture driven to sustain that competitive advantage and hunger for success.
Said differently: With the right culture, you can likely accomplish almost any vision, but only some visions will be achievable without the right culture.
Some questions to consider in this regard:
Is there a vision in place for where the organization is heading today?
What was the “previous” vision, what happened to it, and did it succeed or fail? Either way, why?
Is the current change viewed as a “project” or a “different way of working”? (I would argue the latter is the desired state nearly in all cases)
Wrapping Up
Having shared the above thoughts, I find it difficult to communicate what is most fundamental to excellence: the passion it takes to succeed in the first place.
Excellence is a choice. Success is a commitment. It takes tenacity and grit to make it happen and that isn’t always easy or popular.
There is always room to be better, even in some of the most mundane things we do every day. That’s why courageous leadership is so important and where culture becomes critical in providing the foundation for longer-term success.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
First of all, know that character, values, and integrity matter. They are the foundation of who you are and the reputation you will have with others. Our beliefs and intentions often make their way to our words and actions, so strive to do what’s right, treat others with respect, take accountability for your choices, and know that, in the long term, those who bring kindness and a positive attitude into the world will succeed far more than those who don’t. They will also find themselves surrounded by many others, because a good heart and kindness are forces that will attract others to you over time.
Have faith, no matter what life brings. There will be times when life is challenging and it’s important to know that we are never alone, that God (in all His forms) has a plan, and we will find our way through, as long as we take one day at a time and keep moving forward. There is a great quote from Winston Churchill, “When you’re going through Hell, keep going”, that I always remember in this regard. Faith is our greatest source of hope… and with hope, anything can be possible. With faith and hope, your possibilities in life will be limited only by your capacity to dream.
Work hard. It’s a simple point, but one that isn’t evident to everyone at a time when many seem to feel entitled. Earning your success is an exercise in diligence and commitment as well as persistence and leadership. Oftentimes that effort is not glamorous, requires sacrifice, and will drag you through difficulty, but in struggling and overcoming those obstacles, we find out who we are and the strength we have inside us. No one can give that confidence and experience to you, you simply have to earn it, and it is well worth the effort over time.
Never stop learning. There is always something to understand about other people, new ideas or subjects, and the world around us. Always be looking for the people who can guide and advise you in the different aspects of your life. You will never reach a point where there isn’t an opportunity to grow as a person, and it will make you so much more aware, fulfilled, and worth knowing as time goes on.
Believe in yourself and speak your truth. In the great debate that life can be at times, you should know that your voice matters. At a time when so many take a free pass and just parrot the words, ideology, or biases of others, you do yourself and the world a service to educate yourself, form your own opinion, and respectfully speak your truth, including the times you speak for those who are afraid to do so on their own. Diversity in thought and opinion gives us strength and creates room for change. Let your voice be heard. You can make a difference.
Be humble and be kind. In concert with the previous point, strive to listen as well as you speak. Seek compassion and understanding, including for those who differ most from you. They have their own form of truth, and it can be worth learning what that is, whether you agree with it or not. In a world consumed with egocentric thinking, what we do for others brings the world a little closer, creates the connections that bind us together, and reduces the divisiveness that so many waste their days promoting.
Never give up on your dreams but be ready to pursue new ones when you see them. Life can be like a series of bridges, taking you from one part of your journey to the next, and we often can’t see past the bridge that is immediately in front of us. While it takes tenacity and courage to pursue your life’s passion, understand that your goals will evolve as time progresses, and that’s not a bad thing.
Build upon your successes, learn from your failures. Remember that it’s relatively easy to succeed when you’re not doing anything worth doing or particularly difficult. Again, this is a relatively simple point, but it’s easy to lose the perspective that failures are a means to learn and become better, and they definitely come with taking risks in life. There is no benefit to beating yourself up endlessly over your mistakes. Be thankful for the opportunity to learn and move forward, or life will likely give you the opportunity to learn that lesson again down the road.
Understand that true leaders emerge in adversity. Aspire to be the light that can lead others out of darkness to a better place, whether that is in your personal or professional life. It is easy to lead when everything is going well. It is when things go wrong that poor leaders assign blame and make excuses, and strong leaders take the reins, solve problems, and seek to inspire. It’s a choice that takes courage, but it’s worth remembering that it is also where character is built, reputations are made, and results are either accomplished or not.
Accept that life is rarely what we expect it to be. It’s the journey, along with its peaks and valleys, that makes it so worthwhile. Where possible, the best you can do for yourself and for others is to know when to set aside distractions, be present, and engage in the moments you have throughout your day. Make the most of the experience and don’t be a passenger in your own life.
Finally, take the time to express your care for those who matter to you. Life is unpredictable and you will never run out of love to give to others who are truly deserving of it. We spend far too much time waiting for “the right moment” when that time could be right now. Express your gratitude, express your love, express your support… both you and whoever receives those things will be better for it, and you will have an endless supply of those gifts to give tomorrow as well, so there is no need to hold them in reserve.
I hope the words were helpful… all the best in the steps you take, in the choices you make, in finding happiness, and living the life of your dreams.
Having led and participated in many workshops and facilitated sessions over time, I wanted to share some thoughts on what tends to make them effective.
Unfortunately, there can be a perception that assembling a group of people in a room with a given topic (for any length of time) can automatically produce collaboration and meaningful outcomes. This is definitely not the case.
Before getting into the key dimensions, I suppose a definition of a “workshop” is worthwhile, given there can be many manifestations of what that means in practice. From my perspective, a workshop is a set of one or more facilitated sessions, of any duration, with a group of participants, intended to foster collaboration and produce a specified set of outcomes. By this definition, a workshop could be as short as a one-hour meeting or span many days. The point is that it is facilitated, collaborative, and produces results.
By this definition, a meeting used to disseminate information is not a workshop. A “training session” could contain a workshop component, to the degree there are exercises that involve collaboration and solutioning, but in general, training sessions would not be considered workshops because they are primarily focused on disseminating information.
Given the above definition, there are five factors necessary for a successful workshop:
Demonstrating Agility and Flexibility
Having the Appropriate Focus
Ensuring the Right Participation
Driving Engagement
Creating Actionable Outcomes
Demonstrating Agility and Flexibility
Workshops are fluid, evolving things, where there is an ebb and flow to the discussion and to the energy of the participants. As such, beyond any procedural or technical aspect of running a workshop, it’s critically important to think about and to be aware of the group dynamics and to adjust the approach as needed.
What works:
Soliciting feedback on the agenda, objectives, and participants in advance, both to make adjustments as needed, but also to identify potential issues that could arise in the session itself
Doing pulse checks on progress and sentiment throughout to identify adjustments that may be appropriate
Asking for feedback after a session to identify opportunities for improvement in the future
What to watch out for:
The tone of discussion from participants, level of engagement, and other intangibles can tend to signal that something is off in a session
Tactics to address: Call a break, pulse check the group for feedback
Topics or issues not on the agenda that arise multiple times and have a relationship to the overall objectives or desired outcomes of the session itself
Tactics to address: Adjust the agenda to include a discussion on the relevant topic or issue. Surface the issue and put it in a parking lot to be addressed either during or post-session
Priorities or precedence order of topics not aligning in practice to how they are organized in the session agenda
Tactics to address: Reorder the agenda to align the flow of discussion to the natural order of the solutioning. Insert a segment to provide a high-level end-to-end structure, then resume discussing individual topics. Even if out of sequence, that could help contextualize the conversations more effectively
Having the Appropriate Focus
Workshops are not suitable for every situation. Topics that involve significant amounts of research, rigor, investigation, cross-organizational input, or don’t require a level of collaboration, such as detailed planning, are better handled through offline mechanisms, where workshops can be used to review, solicit input, and align outcomes from a distributed process.
What works:
Ensuring scope is relatively well-defined, minimally at a directional level, to enable brainstorming and effective solutioning
Conducting a kick-off and/or providing the participants with any pre-read material required for the session up front, along with any expectations for “what to prepare” so they can contribute effectively
Choosing topics where the necessary expertise is available and can participate in the workshop
What to watch out for:
Unclear session objectives or desired outcomes
Tactics to address: Have a discussion with the session sponsor and/or participants to obtain the necessary clarity and send out a revised agenda/objectives as needed
Topics that are too broad or too vague to be shaped or scoped by the workshop participants
Tactics to address: Same as previous issue
An agenda that doesn’t provide a clear line of sight between the scope of the session or individual agenda items and desired outcomes
Tactics to address: Map the agenda topics to specific outcomes or deliverables and ensure they are connected in a tangible way. Adjust as needed
Ensuring the Right Participation
Workshops aren’t solely about producing content; they are about establishing a shared understanding and ownership. To that end, having the right people in the room to both inform the discussion and own the outcomes is critical to establishing momentum post-session.
What works:
Ensuring the right level of subject matter expertise to address the workshop scope and objectives
Having cross-functional representation to identify implications, offer alternate points of view, challenge ideas, and suggest other paradigms and mental models that could foster innovation
Bringing in “outside” expertise to the degree that what is being discussed is new or there is limited organizational knowledge of the subject area where external input can enhance the discussion
What to watch out for:
People jumping in and out of sessions to the point that it either becomes a distraction to other participants or there is a loss of continuity and effectiveness in the session as a whole
Tactics to address: Manage part-time participants deliberately to minimize disruptions. Realign sessions to organize their participation into consecutive blocks of time with continuous input rather than sporadic engagement, or see what can be done to either solicit full participation or identify alternate contributors who can participate in a dedicated capacity.
There is a knowledge gap that makes effective discussion difficult or impossible. The lack of the right people in the discussion will tend to drain momentum from a session
Tactics to address: Document and validate assumptions made in the absence of the right experts being present. Investigate participation of necessary subject matter experts in key sessions focused on their areas of contribution
Limiting participants to those who are “like minded”, which may constrain the outcomes
Tactics to address: Explore involving a more diverse group of participants to provide a means for more potential approaches and solutions
Driving Engagement
Having the right people in the room and the right focus is critical to putting the right foundation in place, but making the most of the time you have is where the value is created, and that’s all about energy and engagement.
What works:
Leveraging an experienced facilitator, who is both engaging and engaged. The person leading the workshop needs to have a contagious enthusiasm that translates to the participants
Ensuring an inclusive discussion where all members of the session have a chance to contribute and have their ideas heard and considered, even if they aren’t ultimately utilized
Managing the agenda deliberately so that the energy and focus in the discussion is what it needs to be to produce the desired outcomes
What to watch out for:
A lack of energy or lack of the right pace from the facilitator will likely reduce the effectiveness of the session
Tactics to address: Switch up facilitators as needed to keep the energy high, pulse check the group on how they feel the workshop is going and make adjustments as needed
A lack of collaboration or participation from all attendees
Tactics to address: Active facilitation to engage quieter voices in the room and to manage anyone who is outspoken or dominating discussion
A lack of energy “in the room” that is drawing down the pace or productivity of the session
Tactics to address: Call breaks as needed to give the participants a reset, and balance the amount of presentation versus active engagement in the event too much information sharing and not enough discussion is occurring
Creating Actionable Outcomes
One of the worst experiences you can have is a highly energized session that builds excitement, but then leads to no follow up action. Unfortunately, I’ve seen and experienced this many times over the course of my career, and it’s very frustrating, both when you lead workshops and as a participant, when you spend your time and provide insights that ultimately go to waste. Workshops are generally meant to help launch, accelerate, and build momentum through collaboration. To the extent that a team comes together and uses a session to establish direction, it is critical that the work not go to waste, not only to make the most of that effort, but also to provide reassurance that future sessions will be productive as well. If workshops become about process without outcome, they will lose efficacy very quickly and people will stop taking them seriously as a mechanism to facilitate and accelerate change.
What works:
Tracking the completion of workshop objectives throughout the process itself and making adjustments to the outcomes as required
Leaving the session with clear owners of any next steps
Establishing a checkpoint post-session to take a pulse on where things stand on the outcomes, next steps, and recommended actions
What to watch out for:
Getting to the end of a workshop and having any uncertainty in terms of whether the session objectives were met
Tactics to address: Objectives should be reviewed throughout the workshop to ensure alignment of the participants and commitment to the desired outcomes. There shouldn’t be any surprises waiting by the end
Leaving a session not having identified owners of the next steps
Tactics to address: In the event that no one “signs up” to own next steps or the means to perform the assignment is unclear for some reason, the facilitator can offer to review the next steps with the workshop sponsor and get back to the group with how the next steps will be taken forward
Assigning ownership of next steps without any general timeframe in which those actions were intended to be taken
Tactics to address: Setting a checkpoint at a specified point post-session to understand progress, review conflicting priorities, clear barriers, etc.
Wrapping Up
Going back to the original reason for writing this article, I believe workshops are an invaluable tool for defining vision, designing solutions, and facilitating change. Taking steps to ensure they are effective, engaging, and create impact is what ultimately drives their value.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
In my most recent articles on Approaching Artificial Intelligence and Transformation, I highlight the importance of discipline in achieving business outcomes. To that end, governance is a critical aspect of any large-scale transformation or delivery effort, because it serves both to reduce risk and to inform change on an ongoing basis, both of which are inevitable realities of these kinds of programs.
The purpose of this article is to discuss ways to approach governance overall, to avoid common concerns, and to establish core elements that will increase the probability it will be successful. Having seen and established many PMOs and governance bodies over time, I can honestly say that they are difficult to put in place for as many intangible reasons as mechanical ones, and I hope to address both below.
Have the Right Mindset
Before addressing the execution “dos” and “don’ts”, success starts with understanding that governance is about successful delivery, not pure oversight. Where delivery is the priority, the focus is typically on enablement and support. By contrast, where oversight is the priority, emphasis can be placed largely on controls and intervention. The reality is that both are needed, which will be discussed more below, but starting with an intention to help delivery teams generally translates into a positive and supportive environment where collaboration is encouraged. If, by comparison, the role of governance is relegated to finding “gotchas” and looking for issues without providing teams with guidance or solutions, the effort likely won’t succeed. Healthy relationships and trust are critical to effective governance, because they encourage transparent and open dialogue. Without that, the process will likely break down or be ineffective somewhere along the way.
In a perfect world, delivery teams should want to participate in a governance process because it helps them do their work.
Addressing the Challenges
Suggesting that you want to initiate a governance process can start a very uncomfortable conversation. On the consulting side, clients can feel like it is something being done “to” them, with a third party reporting on their work to management. On the corporate side, it can feel like someone is trying to exercise a level of control over their peers in a leadership team and, consequently, limiting individual autonomy and empowerment in some way. This is why relationships and trust are critically important. Governance is a partnership, and it is about increasing the probability of successful outcomes, not adding a layer of management over people who are capable of doing their jobs with the right level of support.
That being said, three objections typically surface when the idea of establishing governance is introduced: that it will slow things down, that it will hinder value creation, and that it will add unnecessary overhead to teams that are already “too busy” or rushing to a deadline. I’ll focus on each of these in turn, along with what can be done to address the concern in how you approach things.
It Slows Things Down
As I wrote in my article on Excellence by Design, delivering at speed matters. Lack of oversight can lead to efforts going off the rails without timely intervention and support, causing delays and budget overruns. That being said, if the process slows everything down, you aren’t helping teams deliver either.
A fundamental question is whether your governance process is meant to be a “gate” or a “checkpoint”.
Gates can be very disruptive, so there should be compliance- or risk-driven concerns (e.g., security or data privacy) that necessitate stopping or delaying some or all of a project until certain defined criteria or standards are met. If a process is gated, this should be factored into estimation and planning at the outset, so expectations are set and managed accordingly, and to avoid the “we don’t have time for this” discussion that otherwise could happen. Gating criteria and project debriefs / retrospectives should also be reviewed to ensure standards and guidelines are updated to help both mitigate risk and encourage accelerated delivery, which is a difficult balance to strike. In principle, the more disciplined an environment is, the less “gating” should be needed, because teams are already following standards, doing proper quality assurance, and so on, and risk management should be easier on an average effort.
When it comes to “checkpoints”, there should be no difference in the level of standards and guidelines in place; the difference is how exceptions are handled in the course of the review discussion itself. When critical criteria are missed in a gate, there is a “pause and adjust” approach, whereas a checkpoint notes the exception and the requested remedy, ideally along with a timeframe for addressing it. The team is allowed to continue forward, but with an explicit expectation that they will make adjustments so that overall solution integrity is maintained. There is a level of trust involved in a checkpoint process, because the delivery team may choose not to remediate the issues, in which case the purpose and value of standards are undermined and a significant amount of complexity and risk is introduced as a result; this is where a great deal of technical debt and delivery issues are created. If that becomes a pattern over time, it may make sense to shift towards a more gated process, particularly if security, privacy, or other critical issues are being created.
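To make the distinction concrete, below is a minimal sketch of the two review styles; the class names and criteria are assumptions for illustration, not a prescribed implementation.

```python
# Hypothetical sketch contrasting a "gate" with a "checkpoint" review.
# A gate blocks progress until critical criteria pass; a checkpoint records
# exceptions with an agreed remediation date and lets the team continue.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class Criterion:
    name: str
    passed: bool
    critical: bool = False  # e.g., security or data-privacy standards

@dataclass
class ReviewResult:
    proceed: bool
    exceptions: list = field(default_factory=list)

def gate_review(criteria):
    # Pause and adjust: the project stops until critical criteria are met.
    failed = [c.name for c in criteria if c.critical and not c.passed]
    return ReviewResult(proceed=not failed, exceptions=failed)

def checkpoint_review(criteria, remediate_by: date):
    # Note each exception and the requested remedy; the team continues forward.
    failed = [(c.name, remediate_by) for c in criteria if not c.passed]
    return ReviewResult(proceed=True, exceptions=failed)

criteria = [Criterion("data privacy review", passed=False, critical=True),
            Criterion("coding standards", passed=True)]
print(gate_review(criteria))                           # proceed=False: blocked
print(checkpoint_review(criteria, date(2025, 6, 30)))  # proceed=True, exception tracked
```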
Again, the goal of governance is to remove barriers, provide resources where required, and enable successful delivery, but there is a handshake involved, in that the integrity of the process needs to be managed overall. My general point of view is to trust teams to do the right thing and to leverage a checkpoint versus a gated process, but that is predicated on standards and quality being maintained. To the degree that delivery discipline isn’t where it needs to be, a stronger process may be appropriate.
It Erodes Value
Where the process is perceived to be pure overhead, it is important to clarify its overall goals and, to the extent possible, to identify some metrics that can be used to signal whether it is effective in helping to promote a healthy delivery environment.
At an overall level, the process is about reducing risk, promoting speed and enablement, and increasing the probability of successful delivery. Whether that is measured in changes in budget and schedule variance, issues remediated pre-deployment, or by a downstream measure of business value created through initiatives delivered on time, there should be a clear understanding of what the desired outcomes are and a sanity check that they are being met.
Arguably, where standards are concerned, this can be difficult to evaluate and measure, but the growth of technical debt created in an environment that lacks standards and governance, the cost of operations, and the percentage of effort directed at build versus run can certainly be monitored and evaluated at an overall level.
It Adds Overhead
I remember taking an assignment many years ago to help clean up the governance of a delivery environment where the person leading the organization was receiving a stack of updates every week that was literally three feet of documents when printed, spanning hundreds of projects. It goes without saying that all of that reporting provided nothing actionable, beyond everyone being able to say that they were “reporting out” on their delivery efforts on an ongoing basis. The amount of time project and program managers spent updating all that documentation was also substantial. This is not governance. This is administration and a waste of resources. Ultimately, by changing the structure of the process, defining standards, and defining the level of information being reported, the outcome was a five-page summary covering critical programs, ongoing maintenance, production, and key metrics that was produced with considerably less effort and provided much better transparency into the environment.
The goal of governance is providing support, not producing reams of documentation. Ideally, there should be a critical minimum amount of information requested from teams to support a discussion on what they are doing, where they are in the delivery process, the risks or challenges they are facing, and what help (if any) they may need. To the degree that you can leverage artifacts the team is already producing so there is little to no extra effort involved in preparing for a discussion, even better. And, as another litmus test, everything included in a governance discussion should serve a purpose and be actionable. Anything else likely is a waste of time and resources.
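As an illustration of what a “critical minimum” might look like, here is a sketch of a single status record; the field names and values are hypothetical, and the real test of each field is whether it supports the discussion and is actionable.

```python
# Hypothetical sketch of a "critical minimum" status record for a governance
# discussion. Every field should serve a purpose in the conversation;
# anything that isn't actionable probably shouldn't be collected at all.

status_update = {
    "initiative": "Order Management Modernization",  # illustrative name
    "phase": "build",                # where the team is in the delivery process
    "health": "yellow",              # green / yellow / red
    "summary": "API integration tracking ~2 weeks behind plan",
    "risks": ["vendor test environment availability"],
    "help_needed": ["escalation to the vendor account team"],
    "next_checkpoint": "2025-07-15",
}

# Ideally sourced from artifacts the team already produces (plans, backlogs,
# risk logs) so preparing for the discussion adds little to no extra effort.
```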
Making Governance Effective
Having addressed some of the common concerns and issues, there are also things that should be considered that increase the probability of success.
Allow for Evolution
As I mentioned in the opening, the right mindset has a significant influence on making governance successful. Part of that is understanding it will never be perfect. I believe very strongly in launching governance discussions and allowing feedback and time to mature the process and infrastructure given real experience with what works and what everyone needs.
One of the best things that can be done is to track and monitor delivery risks and technology-related issues and use those inputs to guide and prioritize the standards and guidelines in place. Said differently, you don’t need governance to improve things you already do well, you leverage it (primarily) to help you address risks and gaps you have and to promote quality.
Having seen an environment where a team was “working on” establishing a governance process over an extended period of time versus one that was stood up inside 30 days, I’d rather have the latter process in place and allow for it to evolve than one that is never launched.
Cover the Bases
In the previous section, I mentioned leveraging a critical minimum amount of information to facilitate the process, ideally utilizing artifacts a team already has. Again, it’s not about the process, it’s about the discussion and enabling outcomes.
That being said, since trust and partnership are important, even in a fairly bare bones governance environment, there should be transparency into what the process is, when it should be applied, who should attend, expectations of all participants, and a consistent cadence with which it is conducted.
It should be possible to have ad-hoc discussions if needed, but there is something contradictory about suggesting that governance is a key component of a disciplined environment while being unable to schedule the discussions themselves consistently. Anecdotally, when we conducted project review discussions in my time at Sapient, it was commonly understood that if a team was ever “too busy” to schedule their review, they probably needed to have it as soon as possible, so the reasons they were overwhelmed could be made clear.
Satisfy Your Stakeholders
The final dimension to consider in making governance effective is understanding and satisfying the stakeholders surrounding it, starting with the teams. Any process can and should evolve, and that evolution should be based on experience obtained executing the process itself, monitoring operating metrics on an ongoing basis, and feedback that is continually gathered to make it more effective.
That being said, if the process never surfaces challenges and risks, it likely isn’t working properly, because governance is meant to do exactly that, along with providing teams with the support they need. Satisfying stakeholders doesn’t mean painting an unrealistically positive picture, especially if there are fundamental issues in the underlying environment.
I have seen situations where teams were encouraged to share inaccurate information about the health of their work in the interest of managing perceptions and avoiding difficult conversations that were critically needed. This is why having an experienced team leading the conversations and a healthy, supportive, and trusting environment is so important. Governance is needed because things do happen in delivery. Technology work is messy and complicated and there are always risks that materialize. The goal is to see them and respond before they have consequential impact.
Wrapping Up
Hopefully I’ve managed to hit some of the primary points to consider when establishing or evaluating a governance process. There are many dimensions, but the most important are, first, focusing on value and, second, having the right mindset, relationships, and trust. The process is too often the focus, and without those other elements, it will fail. People are at the center of making it work, nothing else.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
In my first blog article back in 2021, I wrote that “we learn to value experience only once we actually have it”… and one thing I’ve certainly realized is that it’s much easier to do something quickly than to do it well. The problem is that excellence requires discipline, especially when you want to scale or have sustainable results, and that often comes into conflict with a natural desire to achieve speed in delivery.
There is a tremendous amount of optimism in the transformative value AI can create across a wide range of areas. While much continues to be written about various tools, technologies, and solutions, there is value in having a structured approach to developing AI strategy and how we will govern it once it is implemented across an organization.
Why? We want results.
Some historical examples on why there is a case for action:
Many organizations have leveraged SharePoint as a way to manage documents. Because it’s relatively easy to use, access to the technology is generally provided to a broad set of users, with little or no guidance on how to use it (e.g., a metatagging strategy), and over time there develops a sprawl of content that may contain critical, confidential, or proprietary information, with limited overall awareness of what exists and where
In recent years, Citizen Development has become popular, with the rise of low-code, no-code, and RPA tools creating access to automation that is meant to enable business (and largely non-technical) resources to rapidly create solutions, from the trivial to the relatively complex. Quite often these solutions aren’t considered part of a larger application portfolio, are managed with little or no oversight, and become difficult to integrate, leverage, or support effectively
In data and analytics, tools like Alteryx can be deployed across a broad set of users who, after they are given access to requested data sources, create their own transformations, dashboards, and other analytical outputs to inform ongoing business decisions. The challenge occurs when the underlying data changes or is not understood properly (and downstream inferences become incorrect), or when these individuals leave or transition out of their roles and the solutions they built are not well understood and are difficult for someone else to leverage or support
What these situations have in common is the introduction of something meant to serve as an enabler that has relative ease of use and accessibility across a broad audience, but where there also may be a lack of standards and governance to make sure the capabilities are introduced in a thoughtful and consistent manner, leading to inefficiency, increased cost, and lost opportunity. With the amount of hype surrounding AI, the proliferation of tools, and general ease of use that they provide, the potential for organizations to create a mess in the wake of their experimentation with these technologies seems very significant.
The remainder of this article explores some dimensions to consider in developing a strategy for the effective use and governance of AI in an organization. The focus will be on the approach, not the content of an AI strategy, which can be the subject of a later article. I am not suggesting that everything needs to be prescriptive, cumbersome, or bureaucratic to the point that nothing can get done, but I believe it is important to have a thoughtful approach to avoid the pitfalls that are common to these situations.
To the extent that “governance” implies control versus enablement in some organizations, or there are real or perceived historical IT delivery issues, there may be concern with heading down this path. Regardless of how the concepts are implemented, I believe they are worth considering sooner rather than later, given we are still relatively early in the adoption of these capabilities.
Dimensions to Consider
Below are various aspects of establishing a strategy and governance process for AI that are worth consideration. I listed them in a roughly sequential order, as I’d think about them personally, though that doesn’t imply you can’t explore and elaborate as many as are appropriate in parallel, and in whatever order makes sense. The outcome of the exercise doesn’t need to be rigid mandates, requirements, or guidelines per se, but nearly all of these topics will likely come up, implicitly or otherwise, as we delve further into leveraging these technologies moving forward.
Lead with Value
The first dimension is probably the most important in forming an AI strategy: articulate the business problems being solved and the value that is meant to be created. It is very easy with new technologies to focus on the tools rather than the outcomes and to start implementing without a clear understanding of the intended impact. As a result, measuring the value created and governing the efficacy of the solutions delivered becomes extremely difficult.
As a person who does not believe in deploying technology for technology’s sake, I see identifying, tracking, and measuring impact as essential to making informed decisions about how we leverage new capabilities and invest in them appropriately over time.
Treat Solutions as Assets
Along the lines of the above point, there is risk associated with being consumed by what is “cool” versus what is “useful” (something I’ve written about previously), and treating new technologies like “gadgets” versus actual business solutions. Where we treat our investments as assets, the discipline we apply in making decisions surrounding them should be greater. This is particularly important in emerging technology because the desire to experiment with and leverage new tools can quickly become unsustainable as the number of one-off solutions grows beyond what can be supported, eventually draining resources from new innovation.
Apply a Lifecycle Mindset
When leveraging a new technical capability, I would argue that we should think in terms of the full product lifecycle: how we identify, define, design, develop, manage, and retire solutions. In my experience, the identify (finding new tools) and develop (delivering new solutions) aspects of the process receive significant emphasis in a speed-to-market environment, but the others much less so, often to the detriment of an organization that quickly finds itself saddled with the technical debt that comes from neglecting the other steps. This doesn’t necessarily imply a lot of additional steps, process overhead, or time and effort; there is value created in each step of a product lifecycle (particularly the early stages), and all of them need due consideration if you want to establish a sustainable, performant environment. The physical manifestation of some of these steps could be as simple as a checklist to make sure there aren’t avoidable blind spots that arise later or create business risk.
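To make the checklist idea concrete, here is a minimal sketch of what one could look like in practice. Everything in it is an assumption for illustration: the stage names mirror the lifecycle steps above, and the sample questions and solution name are invented, not a prescribed set.

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle stages, mirroring identify/define/design/develop/manage/retire
STAGES = ["identify", "define", "design", "develop", "manage", "retire"]

@dataclass
class LifecycleChecklist:
    solution_name: str
    checks: dict = field(default_factory=dict)  # stage -> list of {"question", "answered"}

    def add_check(self, stage: str, question: str, answered: bool = False) -> None:
        if stage not in STAGES:
            raise ValueError(f"Unknown stage: {stage}")
        self.checks.setdefault(stage, []).append({"question": question, "answered": answered})

    def blind_spots(self) -> list:
        """Stages with no checks at all, or with questions still unanswered."""
        gaps = {s for s in STAGES if s not in self.checks}
        gaps |= {s for s, qs in self.checks.items() if any(not q["answered"] for q in qs)}
        return sorted(gaps, key=STAGES.index)

# Example: a hypothetical agent with one answered and one open question
cl = LifecycleChecklist("invoice-summarization-agent")
cl.add_check("define", "Is the intended business outcome documented?", answered=True)
cl.add_check("retire", "Is there a decommissioning owner and trigger?")
print(cl.blind_spots())  # ['identify', 'design', 'develop', 'manage', 'retire']
```

The point isn’t the code itself; it’s that a lightweight, reviewable artifact like this surfaces neglected lifecycle stages before they become technical debt.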
Define Operating Model
Introducing new capabilities, especially ones whose low barrier to entry and ease of use invite a wide audience of users, can cause unintended consequences if not managed effectively. While it’s tempting to draw a business/technology dividing line, my experience has been that there can be very technically capable business consumers of technology and very undisciplined technologists implementing it as well. The point of thinking through the operating model is to identify roles and responsibilities in how you will leverage new capabilities so that expectations and accountability are clear, along with guidelines for how various teams are meant to collaborate over the lifecycle mentioned above.
Whether the goal is to “empower end users” by fully distributing capabilities across teams with some level of centralized support and governance, to fully centralize with decentralized demand generation, or any flavor in between, the point is to understand who is best positioned to contribute at different steps of the process and to promote an appropriate level of consistency, so that the performance and efficacy of both the process and the eventual solutions are something you can track, evaluate, and improve over time. As an example, in a larger organization it would likely be very expensive and ineffective to hire a set of “prompt engineers” operating in a fully distributed manner, compared with having a smaller, centralized set of highly skilled resources who can provide guidance and standards to a broader set of users in a decentralized environment.
Following on from the above, it is also worthwhile to decide whether and how these kinds of efforts should show up in a larger portfolio management process (to the extent one is in place). Where AI and agentic solutions are meant to displace existing ways of working or produce meaningful business outcomes, the time spent delivering and supporting these solutions should likely be tracked so these investments can be evaluated and managed over time.
Standardize Tools
This will likely be one of the larger issues that organizations face, particularly given where we are with AI in a broader market context today. Tools and technologies are advancing at such a rapid rate that having a disciplined process for evaluating, selecting, and integrating a specific set of “approved” tools is and will be challenging for some time.
While asking questions of a generic large language model like ChatGPT, Grok, or DeepSeek and switching from one to the other seems relatively straightforward, there is a lot more complexity involved when we want to leverage company-specific data and approaches like Retrieval-Augmented Generation (RAG) to produce more targeted and valuable outcomes.
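To illustrate where some of that complexity comes from, below is a minimal sketch of just the retrieval step in a RAG-style pipeline. The embed() function is a stand-in for whatever embedding model a given platform provides (a real implementation would call one), and the documents are invented placeholders; the sketch shows the structure, not a working semantic search.

```python
import numpy as np

# Stand-in for a real embedding model call (purely hypothetical)
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)  # unit-length vector

# Invented placeholders standing in for company-specific content
documents = [
    "Q3 pricing policy for industrial customers...",
    "Maintenance procedure for the line 4 extruder...",
    "Data retention standard for customer records...",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(question: str, k: int = 2) -> list:
    """Rank documents by cosine similarity to the question (the 'R' in RAG)."""
    scores = doc_vectors @ embed(question)  # dot product = cosine for unit vectors
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved passages would then be prepended to the LLM prompt as grounding context
context = "\n".join(retrieve("How long do we keep customer records?"))
```

Even in this toy form, questions of data sourcing, refresh, permissions, and quality surface immediately, which is exactly why tool standardization matters more here than with a generic chat interface.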
When it comes to agentic solutions, there is also a proliferation of technologies at the moment. In these cases, managing the cost, complexity, performance, security, and associated data privacy issues will also become complex if there aren’t “preferred” technologies in place and “known good” ways in which they can be leveraged.
Said differently, if we believe effective use of AI is critical to maintaining competitive advantage, we should know that the tools we are leveraging are vetted, producing quality results, and that we’re using them effectively.
Establish Critical Minimum Documentation
I realize it’s risky to use profanity in a professional article, but documentation has to be mentioned if we assume AI is a critical enabler for businesses moving forward. Its importance can probably be summarized if you fast forward one year from today, hold a leadership meeting, and ask “what are all the ways we are using artificial intelligence, and is it producing the value we expected a year ago?” If the response contains no specifics and supporting evidence, there should be cause for concern, because there will be significant investment made in this area over the next 1-2 years, and tracking those investments is important to realizing the benefits that are being promised everywhere you look.
Does “documentation” mean developing a binder for every prompt that is created, every agent that’s launched, or every solution that’s developed? No, absolutely not, and that would likely be a large waste of money for marginal value. There should, however, be a critical minimum amount of documentation developed in concert with these solutions to clarify their purpose, intended outcome and use, value to be created, and any implementation particulars relevant to the nature of the solution (e.g., foundational model, data sets leveraged, data currency assumptions, etc.). An inventory of the assets developed should exist, minimally so that it can be reviewed and audited for security, compliance, IP, and privacy-related concerns where applicable.
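As a concrete (and purely illustrative) example of what a “critical minimum” inventory entry might contain, here is a sketch using the elements mentioned above. The field names and the sample record are assumptions meant to show the pattern, not a prescribed schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class AISolutionRecord:
    """One entry in a 'critical minimum' inventory of AI assets (fields are illustrative)."""
    name: str
    purpose: str             # business problem and intended outcome
    owner: str               # accountable business/technology owner
    foundational_model: str  # base model leveraged
    data_sets: list          # sources used, for security/privacy/IP review
    data_currency: str       # refresh assumptions (e.g., "nightly", "static snapshot")
    value_measure: str       # how impact will be tracked
    registered: date = field(default_factory=date.today)

inventory = [
    AISolutionRecord(
        name="supplier-contract-summarizer",
        purpose="Reduce review time for standard supplier contracts",
        owner="Procurement (business) / IT (technical)",
        foundational_model="<approved LLM>",
        data_sets=["contract repository"],
        data_currency="weekly",
        value_measure="average review hours per contract",
    )
]
# The inventory can be exported for audit and compliance review
print([asdict(r) for r in inventory])
```

A single page (or record) per solution is often enough to answer the leadership question above with specifics a year later.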
Develop Appropriate Standards
There are various types of solutions that could be part of an overall AI strategy, and the opportunity to develop standards that promote quality, reuse, scale, security, and so forth is significant. These could take the form of a “how to” guide for writing prompts, data sourcing and refresh standards for RAG-enabled solutions, reference architectures and design patterns across various solution types, or limits on the number of agents that can be developed without review for optimization opportunities. In this regard, something pragmatic, that isn’t overly prescriptive but also doesn’t reflect a total lack of standards, would be appropriate in most organizations.
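As one small example of what such a standard could look like in code, below is a hypothetical prompt template that enforces a required structure. The template fields, wording, and validation rules are all assumptions meant to show the idea of a shared standard, not a recommended format.

```python
# A shared template enforcing the structural elements a prompt "how to" guide
# might require; everything here is illustrative.
PROMPT_TEMPLATE = """You are acting as: {role}
Task: {task}
Relevant context: {context}
Constraints: {constraints}
If required information is missing, say so rather than guessing."""

def build_prompt(role: str, task: str, context: str, constraints: str = "None") -> str:
    """Build a prompt that conforms to the (hypothetical) organizational standard."""
    for name, value in [("role", role), ("task", task), ("context", context)]:
        if not value.strip():
            raise ValueError(f"Prompt standard requires a non-empty '{name}'")
    return PROMPT_TEMPLATE.format(role=role, task=task,
                                  context=context, constraints=constraints)

prompt = build_prompt(
    role="a maintenance planning assistant",
    task="Summarize open work orders for line 4",
    context="Work order extract dated today",
    constraints="Cite the work order IDs you used",
)
```

The value isn’t the template itself but the consistency it creates: outputs become comparable, reviewable, and easier to improve across a broad set of users.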
In a decentralized operating environment, it is highly probable that solutions will be developed in a one-off fashion, with varying levels of quality, consistency, and standardization, which could create issues with security, scalability, technical debt, and so on. Defining the handshake between the consumers of these new capabilities and those developing the standards, along with when it is appropriate to define them, could be important things to consider.
Design Solutions
Again, as I mentioned in relation to the product lifecycle mindset, there can be a strong preference to deliver solutions without giving much thought to design. While this is often attributed to “speed to market” and a “bias towards action”, it doesn’t take long for tactical thinking to lead to a considerable amount of technical debt, an inability to reuse or scale solutions, or significant operating costs that start to slow down delivery and erode value. These are avoidable consequences when thought is given to architecture and design up front and the effort nearly always pays off over time.
Align to Data Strategy
This topic could be an article in itself, but suffice it to say that having an effective AI strategy is heavily dependent on an organization’s overall data strategy and the health of that portfolio. Said differently: if your underlying data isn’t in order, you won’t be able to derive much in terms of meaningful insights from it. Concerns related to privacy and security, data sourcing, stewardship, data quality, lineage and governance, use of multiple large language models (LLMs), effective use of RAG, the relationship of data products to AI insights and agents, and effective ways of architecting for agility, interoperability, composability, evolution, and flexibility are all relevant topics to be explored and understood.
Define and Establish a Governance Process
Having laid out the above dimensions in terms of establishing and operationalizing an AI strategy, there needs to be a way to govern it. The goal of governance is to achieve meaningful business outcomes by promoting effective use and adoption of the new capabilities, while managing exposure related to introducing change into the environment. This could be part of an existing governance process or set up in parallel and coordinated with others in place, but the point is that you can’t optimize what you don’t monitor and manage, and the promise of AI is such that we should be thoughtful about how we govern its adoption across an organization.
Having worked in both consulting and corporate environments for many years and across multiple industries, I believe we’re at an interesting juncture in how technology is leveraged at a macro level and in the broader business and societal impacts of those choices, whether that is AI-generated content that could easily be mistaken for “real” events or the use of data collection and advanced analytics in various business and consumer scenarios to create “competitive advantage”.
The latter scenario has certainly been discussed for quite a while, whether in relation to managing privacy while using mobile devices, trusting a search engine and how it “anonymizes” your data before potentially selling it to third parties, or whether the results presented as the outcome of a search (or GenAI request) are objective and unbiased, or presented with some level of influence given the policies or leanings of the organization sponsoring it.
The question to be explored here is: How do we define the ethical use of technology?
For the remainder of this article, I’ll suggest some ways to frame the answer in various dimensions, acknowledging that this isn’t a black-and-white issue and the specifics of a situation could make some of the considerations more or less relevant.
Considerations
What Ethical Use Isn’t
Before diving into what I believe could be helpful in framing an answer, I wanted to clarify what I don’t consider a valid approach, namely the argument used in the majority of cases where individuals or organizations cross a line: we can use technology in this way because it gives us competitive advantage.
Competitive advantage tends to be an easy argument in the interest of doing something questionable, because there is a direct or indirect financial benefit associated with the decision, thereby clouding the underlying ethics of the decision itself. “We’re making more money, increasing shareholder value, managing costs, increasing profitability, etc.” are the things that tend to move the needle past the roadblocks that can exist in organizations when mobilizing new ideas. The problem is that, with all of the data collected in the interest of securing approval and funding for an initiative, I haven’t seen many cases where there is a proverbial “box to check” in terms of the effort conforming to ethical standards (whether specific to technology use or otherwise).
What Ethical Use Could Be
That point having been stated, below are some questions that could be considered as part of an “ethical use policy”, understanding that not all may have equal weight in an evaluation process.
They are:
Legal/Compliance/Privacy
Does the ultimate solution conform to existing laws and regulations for your given industry?
Is there any pending legislation related to the proposed use of technology that could create such a compliance issue?
Is there any industry-specific legislation that would suggest a compliance issue if it were logically applied in a new way that relates to the proposed solution?
Would the solution cause a compliance issue in another industry (now or in legislation that is pending)? Is there risk of that legislation being applied to your industry as well?
Transparency
Is there anything about the nature of the solution that, were it shared openly (e.g., through a press release or industry conference/trade show), would cause customers, competitors, or partners/suppliers to raise issues with the organization’s market conduct or end user policies? This can be a tricky item given the previous points on competitive advantage and what might be labeled a “trade secret” but potentially violate antitrust, privacy, or other market expectations
Does anything about the nature of the solution, were it to be shared openly, suggest that it could cause trust issues between customers, competitors, suppliers, or partners with the organization and, if so, why?
Cultural
Does the solution align to your organization’s core values? As an example, if there is a transparency concern (above) and “Integrity” is a core value (which it is in many organizations), why does that conflict exist?
Does the solution conform to generally accepted practices or societal norms in terms of business conduct between you and your target audience (customers, vertical or horizontal partners, etc.)?
Social Responsibility
Does the solution create any potential issues from an environmental, societal, or safety standpoint that could have adverse impacts (direct or indirect)?
Autonomy and Objectivity
Does the solution provide a fact-based (or analytically correct) outcome, free of any potential bias, that can also be governed, audited, and verified? This is an important dimension to consider given that our dependency on automation continues to increase and we want to be able to trust the security, reliability, accuracy, and so on of what that technology provides.
Competitive
If a competitor announced they were developing a solution of exactly the same nature as what is proposed, would it be a comfortable situation, or something you would challenge as an unethical or unfair business practice? Quite often, the lens through which unethical decisions are made is biased by an internal focus. If that line of sight were reversed and a competitor were open about doing exactly the same thing, would that be acceptable or not? If there would be issues, there is likely cause for concern in developing the solution yourself
Wrapping Up
From a process standpoint, a suggestion would be to take the above list and discuss it openly, in the interest not only of determining the right criteria for you, but also of establishing where these opportunities exist (because they do and will, the more analytics- and AI-focused capabilities advance). Ultimately, there should be a check-and-balance process for ethical use of technology, in line with any broader compliance and privacy-related efforts that may exist within an organization today.
In the end, the “right thing to do” can be a murky and difficult question to answer, especially with ever-expanding tools and technologies that create capabilities a digital business can use to its advantage. But that’s where culture and values should still come into play, not simply because there is or isn’t a compliance issue, but because reputations are made and reinforced over time through these kinds of decisions, and they either help build a brand or damage it when the right questions aren’t explored at the right time.
It’s interesting to consider, as a final note, that most companies have an “acceptable use of IT” policy for employees, contractors, and so forth, in terms of setting guidelines for what they can or can’t do (e.g., accessing ‘prohibited’ websites / email accounts or using a streaming platform while at work), but not necessarily for technology directed outside the organization. As we enter a new age of AI-enabled capabilities, perhaps it’s a good time to look at both.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
In my recent article on Exploring Artificial Intelligence, I covered several dimensions of how I think about the direction of AI, including how models will evolve from general-purpose and broad-based to be more value-focused and end consumer-specific as the above diagram is intended to illustrate.
The purpose of this article is to dive a little deeper into a mental model for how I believe the technology could become more relevant and valuable in an end-user (or end consumer) specific context.
Before that, a few assertions related to the technology and end user application of the technology:
The more we can passively collect data in the interest of simplifying end-user tasks and informing models, the better. People can be both inconsistent and unreliable in how they capture data. In reality, our cell phones are collecting massive amounts of data on an ongoing basis that is used to drive targeted advertising and other capabilities without our involvement. In a business context, however, the concept of doing so can be met with significant privacy and other concerns, which is a shame because, while data is being collected on our devices regardless, we aren’t able to benefit from it in the context of doing our work
Moving from a broad- or persona-based means of delivering technology capabilities to a consumer-specific approach is a potentially significant advancement in enabling productivity and effectiveness. This would be difficult or impossible to achieve without leveraging an adaptive approach that synthesizes various technologies (personalization, customization, dynamic code generation, role-based access control, AI/ML models, LLMs, content management, and so forth) to create a more cohesive and personalized user experience
While I am largely focusing on end-user application of the technology, I would argue that the same concepts and approach could be leveraged for the next generation of intelligent devices and digital equipment, such as robotics in factory automation scenarios
To make the technology both performant and relevant, part of the design challenge is to continually reduce and refine the level of “model” information needed at the next layer of processing, so as not to overload the end computing device (presumably a cell phone or tablet) with a volume of data that isn’t required to enable effective action on behalf of the data consumer.
The rest of this article will focus on providing a mental model for how to think about the relationship across the various kinds of models that may make up the future state of AI.
Starting with a “Real World” example
Having spent a good portion of my time off traveling across the U.S., I was reminded more than once of the trust I place in Google Maps (even with a printed road atlas in my car), particularly when driving through an “open range” gravel road with cattle roaming about in northwest Nebraska on my way to South Dakota. In many ways, navigation software represents a good starting point for where I believe intelligent applications will eventually go in the business environment.
Maps is useful as a tool because it synthesizes the data it has on roads and navigation options with specific information like my chosen destination, location, speed traps, delays, and accident information specific to my potential routes, allowing for a level of customization if I prefer to take routes that avoid tolls and so on. From an end-user perspective, it provides a next recommended action, remaining contextually relevant to where I am and what I need to do, along with how long it will be until that action needs to be taken, the distance remaining, and the time I should arrive at my final destination.
In a connected setting, navigation software pulls pieces of its overall model and applies data on where I am and where I’m going, to (ideally) help me get where I’m going as efficiently as possible. The application is useful because it is specific to me, to my destination, and to my preferred route, and is different than what would be delivered to a car immediately behind me, despite leveraging the same application and infrastructure. This is the direction I believe we need to go with intelligent applications, to drive individual productivity and effectiveness.
Introducing the “Tree of Knowledge” concept
The Overall Model
The visual above is meant to represent the relationship of general-purpose and foundational models to what is ultimately delivered to an end user (or piece of digital equipment) in a distributed fashion.
Conceptually, I think of the relationship across data sets as if it were a tree.
The general-purpose model (e.g., LLM) provides the trunk that establishes a foundation for downstream analytics
Domain-specific models (e.g., RAG) act as the branches that rely on the base model (i.e., the trunk) to provide process- or function-specific capabilities that can span a number of end-user applications, but have specific, targeted outcomes in mind
A “micro”-model is created when specific branches of the tree are deployed to an end-user based on their profile. This represents the subset that is relevant to that data consumer given their role, permissions, experience level, etc.
The data available at the end point (e.g., mobile device) then provides the leaves that populate the branches of the “micro”-models that have been deployed to create an adaptive model used to inform the end user and drive meaningful and productive action.
The adaptive model should also take into account user preferences (via customization options) and personalization to tune their experience as closely as possible to what they need and how they work.
In this way, the progression of models moves from general to very specific, end-user focused solutions that are contextualized with real-time data much the same as the navigation example above.
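For those who think in code, a small sketch of the tree relationship follows. The model names, branch domains, and profile fields are all invented for illustration; the point is the progression from trunk to branches to a per-user micro-model with live “leaves” attached.

```python
# Trunk (general model), branches (domain models), micro-model (branches deployed
# per user profile), leaves (live endpoint data). All names are assumptions.
trunk = {"name": "general-LLM", "scope": "broad language and reasoning"}

branches = {
    "maintenance": {"depends_on": trunk["name"], "scope": "equipment procedures"},
    "quality": {"depends_on": trunk["name"], "scope": "inspection and defect analysis"},
    "safety": {"depends_on": trunk["name"], "scope": "site safety protocols"},
}

def build_micro_model(profile: dict) -> dict:
    """Deploy only the branches relevant to this user's role and permissions."""
    allowed = [b for b in profile["permissions"] if b in branches]
    return {"user": profile["id"], "branches": {b: branches[b] for b in allowed}}

def apply_leaves(micro_model: dict, endpoint_data: dict) -> dict:
    """Attach live, device-local data (the 'leaves') to the deployed micro-model."""
    return {**micro_model, "leaves": endpoint_data}

technician = {"id": "tech-042", "permissions": ["maintenance", "safety"]}
adaptive = apply_leaves(
    build_micro_model(technician),
    {"location": "line 4", "shift": "night", "open_work_orders": 3},
)
print(adaptive["branches"].keys())  # only the branches this user is entitled to
```

Note how the data shrinks at each layer: the endpoint never receives the trunk or unrelated branches, which is exactly the reduction the design challenge above calls for.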
It is also worth noting that, in addition to delivering these capabilities, the mobile device (or endpoint) may collect and send data back to further inform and train the knowledge models by domain (e.g., process performance data) and potentially develop additional branches based on gaps that may surface in execution.
Applying the Model
Having set context on the overall approach, there are some notable ways these capabilities could create a different experience and level of productivity than what exists today, namely:
Rather than delivering content and transactional capabilities based on an end user’s role and persona(s), those capabilities would be deployed to a user’s device (the branches of the “micro”-model), but synthesized with other information (the “leaves”) like the user’s experience level, preferences, location, training needs, and equipment information (in a manufacturing-type context), to generate an interface specific to them that continually evolves to optimize their individual productivity
As new capabilities (i.e., “branches”) are developed centrally, they could be deployed to targeted users, and their individual experiences would adapt to incorporate them in ways that work best for each user and their given configuration, without having to relearn the underlying application(s)
Going Back to Navigation
On the last point above, a parallel example would be the introduction of weather information into navigation.
At least in Google Maps, while there are real-time elements like speed traps, traffic delays, and accidents factored into the application, there is currently no mechanism to recognize or warn end users about significant weather events that may also surface along the route. In practice, where severe weather is involved, this could represent a safety risk to the traveler. In the event that the model were adapted to include a “branch” for this kind of data, one would hope that the application would behave the same from an end-user standpoint, but with the additional capability integrated into the application.
Wrapping Up
Understanding that we’re still early in the exploration of how AI will change the way we work, I believe that defining a framework for how various types of models can integrate and work together across purposes would enable significant value and productivity if designed effectively.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
It’s impossible to scroll through a news feed and miss the energy surrounding AI and its potential to transform. The investment in technology as a strategic differentiator is encouraging to see, particularly as a person who thrives on change and innovation. It is, however, also concerning that the ways in which it is often described are reminiscent of other technology advances of the past… CRM, BigData, .com… where there was an immediate surge in spending without a clear set of outcomes in mind, operating approach, or business architecture established for how to leverage it effectively. Consequently, while a level of experimentation is always good in the interest of learning and exploring, a lot of money and time can be wasted (and technical debt created) without necessarily creating any meaningful business value through the process.
For the purposes of this article, I’m going to focus on five dimensions of AI and how I’m thinking about them:
Framing the Problem – Thinking about how AI will be used in practice
The Role of Compute – Considering the needs for processing moving forward
Revisiting Data Strategy – Putting AI in the context of the broader data landscape
Simplifying “Intelligence” – Exploring the end user impact of AI
Thinking About Multi-Cloud – Contemplating how to approach AI in a distributed environment
This topic is very extensive, so I’ll try to keep the thoughts at a relatively high level to start and dive into more specifics in future articles as appropriate.
Framing the Problem
Considering the Range of Opportunities
While a lot of the attention surrounding Generative AI over the last year has been focused on content generation for research, communication, software development, and other purposes, I believe the focus for how AI can create business value will shift substantially to be more outcome-driven and directed at specific business problems. In this environment, smaller, more focused data sets (e.g., incorporating process, market, equipment, end user, and environmental data) will be analyzed to understand causal relationships in the interest of producing desired business outcomes (e.g., optimizing process efficiency, improving risk management, increasing safety) and content (e.g., just in time training, adaptive user experiences). Retrieval-Augmented Generation (RAG) models are an example of this today, with a purpose-built model leveraging a foundational large language model to establish context for a more problem-specific solution.
This is not to suggest that general purpose models will decline in utility, but rather that I believe those applications will be better understood, mature, and become integrated where they create the most value (in relatively short order). The focus will then shift towards areas where more direct business value can be obtained through an evolution of these technologies.
For that to occur, the fundamentals of business process analysis need to regain some momentum to overcome the ‘silver bullet’ mentality that seems largely prevalent with these technologies today. It is, once again, a rush towards “the cool versus the useful”, which goes back to my opening remark about how current AI discussions feel a lot like conversations at the start of the .com era, and the sooner we shift towards a disciplined approach to leveraging these technology advancements, the better.
The opportunity will be to look at how we can leverage what these models provide, in terms of understanding multi-dimensional relationships across large sets of data, but then extending the concept to become more deterministic in terms of what decisions under a given set of conditions are most likely to bring about desired outcomes (i.e., causal models). This is not where we are today, but is where I believe these technologies are meant to go in the near future. Ultimately, we don’t just want to produce content, we want to influence processes and business results with support from artificial intelligence.
As purpose-built models evolve, I believe there will be a base set of business insights made available across communities of end users, and then an emergence of secondary insights developed in a derivative fashion. In this way, rather than trying to summit Mount Everest in a direct ascent, we will establish one or more layers of outcomes (analogous to having multiple base camps) that facilitate the eventual goal.
Takeaways
General purpose AI and large language models (LLMs) will continue to be important and become integrated with how we work and consume technology, but will reach a plateau of usefulness fairly rapidly over the next year or so
Focus will shift towards integrating transactional, contextual, and process data with the intention of predicting business outcomes (causal AI) in a much more targeted way
The overall mindset will pivot from models that do everything to ones that do something much more specific, with a desired outcome in mind up front
The Role of Compute
Considering the Spectrum of Needs
Having set the context of general versus purpose-built AI and the desire to move from content to more outcome-focused intelligence, the question is what this means for computing. There is a significant amount of attention going to specialized processors (e.g., GPUs) at the moment, with the presumption that there are significant computing requirements to generate models based on large sets of data in a reasonable amount of time. That being said, while content-focused outcomes may be based on a large volume of data and only need to be refreshed on a periodic basis, the more we want AI to assist in the performance of day-to-day tasks, the more we need the insights to be produced on a near-real time basis and available on a mobile device or embedded on a piece of digital equipment.
Said differently, as our focus shifts to smaller and more specific business problems, so should the data sets involved, making it possible to develop purpose-built models on more “standard” computing platforms that are commercially available, where the models are refreshed on a more frequent basis, taking into account relevant environmental conditions, whether that’s production plans or equipment status in manufacturing, market conditions in financial services, or weather patterns or other risk factors in insurance.
The Argument for a “Micro-Model”
Assuming a purpose-built model can be developed with a particular business outcome or process in mind, things could take an interesting leap forward by extending those models into edge computing environments, like a digital worker in a manufacturing facility, where the specific end user’s knowledge and skills, geo-location, environmental conditions, and equipment status could be fed into a purpose-built model, which is then extended to create a more adaptive model that provides a user-specific set of insights and instructions to drive productivity, safety, and effectiveness.
Ultimately, AI needs to be focused on the individual and run on something as accessible as a mobile device to truly realize its potential. The same would also be true for extending models that could be embedded within a piece of industrial equipment to run as part of a digital facility. That is beyond anything we can do today; it makes insights personalized and specific to an individual, and that concept holds significantly more business value than targeting a specific user group or persona from an application development standpoint. Said differently, the concept is similar to integrating personalization, workflow, presentation, and insights into one integrated technology.
With this in mind, perhaps the answer will ultimately still involve highly specialized computing, but before rushing in the direction of quantum computing and buying a significant number of GPUs, I’d consider the ultimate outcome we want, which is to put the power of insights in the hands of end users in their day-to-day activities so they are much more effective in what they are able to do. That is not a once-a-month refresh of a massive amount of data. It is a constantly evolving model that is based on learnings from the past, but also on the current realities and conditions of the moment and the specific individual taking action on them.
Takeaways
Computing requirements will shift from centralized processing of large data volumes to smaller, curated data sets that are refreshed more often and targeted to specific business goals
Ultimately, the goal should be to enable end users with a highly personalized model that is focused on them, the tasks they need to accomplish, and the current conditions under which they are operating
Processing for artificial intelligence will therefore be distributed across a spectrum of environments from large scale centralized methods to distributed edge appliances and mobile devices
Revisiting Data Strategy
Business Implications of AI
The largest risk I see with artificial intelligence today is no different than with anything else where data is concerned: the value of new technology is only as good as the underlying data quality, and that’s a business issue (for the most part).
Said differently, in the case of AI, if the underlying data sets upon which models are developed have data quality issues, and there is a lack of data management and data governance in place, the inferences drawn will likely be of limited value.
Ultimately, the more we move from general-purpose to purpose-built solutions, the more the ability to identify the relevant and necessary data to incorporate into a model becomes a significant accelerator of value. This is because the “give me all the data” approach would likely both increase the time to develop and produce models and introduce significant overhead in ensuring data quality and governance to confirm the usefulness of the resulting models.
If, as an example, I wanted to use AI to ingest all the training materials developed across a set of manufacturing facilities in the interest of synthesizing, standardizing, and optimizing them across an enterprise, the underlying quality of those materials becomes critically important in deriving the right outcomes. There may be out of date procedures, unique steps specific to a location, quality issues in some of the source content, etc. The technology itself doesn’t solve these issues. Arguably, a level of data wrangling or quality tools could be helpful in identifying and surfacing these issues, but the point is that data governance and curation are required before the infrastructure would produce the desired business outcomes.
Technology Implications of AI
As the diagram intends to indicate, whether AI lives as a set of intelligent agents that run as separate stovepipes in parallel with existing applications and data solutions, or becomes integrated with them, the direction for how things will evolve is an important element of data strategy to consider, particularly in a multi-cloud environment (something I’ll address in the final section).
As discussed in The Intelligent Enterprise, I believe that the eventual direction for AI (as we already see somewhat evidenced with Copilot in Microsoft Office 365), is to move from separate agents and data apps (“intelligent agents”) to having those capabilities integrated into the workflow of applications themselves (making them “intelligent applications”), where they can create the most overall value.
What this suggests to me is that transaction data from applications will make its way into models, and be exposed back into consuming applications via AI services. Whether the data ultimately moves into a common repository that can handle both the graph and relational data within the same data solution remains to be seen, but having personally developed an integrated object- and relational-database for a commercial software package thirty years ago at the start of my career, I can foresee that there may be benefits in thinking through the value of that kind of solution.
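As a rough sketch of that pattern, under the assumption that insights are exposed through a narrow service interface, here is a toy example of an “AI service” that consuming applications could call. The class and method names are invented, and the “model” is a placeholder rule rather than a real implementation.

```python
from typing import Protocol

class InsightService(Protocol):
    """The narrow surface a consuming application would depend on (illustrative)."""
    def ingest(self, transaction: dict) -> None: ...
    def insights_for(self, entity_id: str) -> list: ...

class SimpleInsightService:
    """In-memory stand-in: accumulates transactions and serves derived insights."""
    def __init__(self) -> None:
        self._by_entity: dict = {}

    def ingest(self, transaction: dict) -> None:
        self._by_entity.setdefault(transaction["entity_id"], []).append(transaction)

    def insights_for(self, entity_id: str) -> list:
        txns = self._by_entity.get(entity_id, [])
        # A real service would run a model here; this placeholder just flags volume
        return [{"entity_id": entity_id, "signal": "high_activity"}] if len(txns) > 10 else []

# The application calls the service rather than embedding model logic itself
svc = SimpleInsightService()
svc.ingest({"entity_id": "customer-7", "amount": 120.0})
print(svc.insights_for("customer-7"))  # [] until activity crosses the threshold
```

The design choice worth noting is the interface boundary: applications depend on the insight contract, not on how (or where) the underlying model and data are managed, which keeps options open as the storage question above gets resolved.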
Where things get more complicated on an enterprise level is when you scale these concepts out. I will address the end user and multi-cloud aspects of this in the next two sections, but it’s critically important in data strategy to consider how too many point solutions in the AI domain could significantly increase cost and complexity (not to mention have negative quality consequences). As data sets and insights are meant to extend outside an individual application to cross-application and cross-ecosystem levels, the ways in which that data is stored, accessed, and exposed likely will become significant. My article on Perspective on Impact-Driven Analytics attempted to establish a layered approach to how to think about the data landscape, from production to consumption, that may provide a starting point for evaluating alternatives in this regard.
Takeaways
While AI provides new technology capabilities, business ownership / stewardship of data and the processes surrounding data quality, data management, and data governance are extremely critical in an AI-enabled world
As AI capabilities move within applications, the need to look across applications for additional insights and optimization opportunities will emerge. To the extent that can be designed and architected with a consistent approach, it will be significantly more cost-effective and create more value over time at an enterprise level
Experimentation is appropriate in the AI domain for the foreseeable future, but it is important to consider how these capabilities will ultimately become integrated with the application and data ecosystems in medium to larger organizations in the interest of getting the most long-term value from the investments
Simplifying “Intelligence”
Avoiding the Pitfalls of Introducing New Technologies to End Users
The diagram above is meant to help conceptualize what could ultimately occur if AI capabilities are introduced as various data apps or intelligent agents, running separate from applications versus becoming an integrated part of the way intelligent applications behave over time.
At an overall level, expecting users to arbitrate new capabilities without integrating them thoughtfully into the workflow and footprint that exists creates the conditions for significant change management and productivity issues. This is always true when introducing change, but the expectations associated with the disruptive potential of AI (at the moment) are quite high, and that could set the stage for disappointment if there isn’t a thoughtful design in place for how the solutions are meant to make the consumer more effective on a workflow and task level.
Takeaways
Intelligence capabilities will move inside applications rather than be adjacent to them, providing more of a “guided path” approach to end users
To the degree that “micro-models” are eventually in place, that could include making the presentation layer of applications personalized to the individual user based on their profile, experience level, role, and operating conditions
The role of “Intelligent Agents” will take on a higher-level, cross-application focus, which could be (as an example) optimizing notifications and alerts coming from various applications to a more thoughtful set of prioritized actions intended to maximize individual performance
Thinking About Multi-Cloud
Working Across Environments
With the introduction of AI capabilities at an enterprise level, the challenge becomes how to leverage and integrate these technologies, particularly given that data may exist across a number of hosted and cloud-based environments. For simplicity’s sake, I’m going to assume that any cloud capability required for data management and AI services can be extended to the edge (via containers), though that may not be fully true today.
At an overall level, as it becomes desirable to extend models to include data resident both in something like Microsoft Office 365 (running on Azure) and corporate transactional data (largely running in AWS if you look at market share today), the considerations and costs for moving data between platforms could be significant if not architected in a purposeful manner.
To that end, my suggestion is to look at business needs in one of three ways:
Those that can be addressed via a single cloud platform, in which case it would likely be appropriate to design and deliver solutions leveraging the AI capabilities available natively on that platform
To the extent a solution extends across multiple providers, it may be possible to layer the solutions such that each cloud platform performs a subset of the analysis, resulting in pre-processed data that can then be published to a centralized, enterprise cloud environment where the various data sets are pulled into a single enterprise model used to address the overall need (a sketch of this layering approach follows the list)
If a partitioning approach isn’t possible, then some level of cost, capability, and performance analysis would likely make sense to determine where data should reside to enable the necessary integrated models to be developed
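To make the layering option more tangible, here is a minimal sketch in which each platform reduces its own raw data locally and only compact summaries cross cloud boundaries into a central environment. The function names, platform labels, and aggregation logic are all illustrative assumptions, not a reference implementation.

```python
# Each platform reduces its own raw data; only compact summaries cross cloud
# boundaries to the central environment. Names and logic are illustrative.
def preprocess_on_platform(platform: str, raw_records: list) -> dict:
    """Runs inside each cloud: reduce raw data to a compact summary to publish."""
    total = sum(r["value"] for r in raw_records)
    return {"platform": platform, "count": len(raw_records), "total": total}

def central_model(summaries: list) -> dict:
    """Runs centrally: combine the published summaries into one enterprise view."""
    return {
        "sources": [s["platform"] for s in summaries],
        "grand_total": sum(s["total"] for s in summaries),
    }

# Only the two small summary dicts move between platforms, not the raw records
azure_summary = preprocess_on_platform("azure", [{"value": 10}, {"value": 5}])
aws_summary = preprocess_on_platform("aws", [{"value": 7}])
print(central_model([azure_summary, aws_summary]))  # {'sources': ['azure', 'aws'], 'grand_total': 22}
```

The design benefit is that egress costs and data movement stay proportional to the summaries, not the underlying data volumes, which is where multi-cloud architectures tend to get expensive.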
Again, the point is to step back from individual solutions and projects to consider the enterprise strategy for how data will be managed and models will be developed and deployed overall. The alternative approach of deploying too many point solutions could lead to considerable cost and complexity (i.e., technical debt) over time.
Takeaways
AI capabilities are already available on all the major cloud platforms. I believe they will reach relative parity from a capability standpoint in the foreseeable future, to the point that they shouldn’t be a primary consideration in how data and models are managed and deployed
The more the environment can be designed with standards, modularity, integration, interoperability, and a level of composability in mind, the better. Technology solutions will continue to be introduced that an organization will want to leverage without having to abandon or migrate everything that is already in place
It is extremely probable that AI models will be deployed across cloud platforms, so having a deliberate strategy for how to manage and facilitate this should be given consideration
A lack of overall multi-cloud strategy will likely create complexity and cost that may be difficult to unwind over time
Wrapping Up
If you’ve made it this far, thank you for taking the time; hopefully some of the concepts were thought-provoking. In Excellence by Design, I talk about ‘Relentless Innovation’…
Admittedly, there is so much movement in this space, that it’s very possible some of what I’ve written is obsolete, obvious, far-fetched, or some combination of all of the above, but that’s also part of the point of sharing the ideas: to encourage the dialogue. My experience in technology over the last thirty-two years, especially with emerging capabilities like artificial intelligence, is that we can lose perspective on value creation in the rush to adopt something new and the tool becomes a proverbial hammer in search of a nail.
What would be far better is to envision a desired end state, identify what we’d really like to be able to do from a business capability standpoint, and then endeavor to make that happen with advanced technology. I do believe there is significant power in these capabilities for the organizations that leverage them effectively.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.