What It Is: Architecture provides the structure, standards, and framework for how technology strategy should be manifested and governed, including its alignment to business strategy, capabilities, and priorities. It should ideally be aligned at every level, from the enterprise to individual delivery projects
Why It Matters: Technology is a significant enabler and competitive differentiator in most organizations. Architecture provides a mental model and structure to ensure that technology delivery is aligned to overall strategies that create value. Lacking architecture discipline is like building a house without a blueprint… it will cost more to build, won't be structurally sound, and will be expensive to maintain
Key Dimensions
Operating Model – How you organize around and enable the capability
Enable Innovation – How you allow for rapid experimentation and introduction of capabilities
Accelerating Outcomes – How you promote speed-to-market through structured delivery
Optimizing Value/Cost – How you manage complexity, minimize waste, and modernize
Inspiring Practitioners – How you identify skills, motivate, enable, and retain a diverse workforce
Performing in Production – How you promote ongoing reliability, performance, and security
Operating Model
Design the model to provide both enterprise oversight and integrated support with delivery
Ensure there is two-way collaboration, so delivery informs strategy and standards and vice versa
Foster a “healthy” tension between doing things “rapidly” and doing things “right”
Innovate
Identify and thoughtfully evaluate emerging technologies that can provide new capabilities that promote competitive advantage
Engage in experimentation to ensure new capabilities can be productionized and scaled
Accelerate
Develop standards, promote reuse, avoid silos, and reduce complexity to enable rapid delivery
Ensure governance processes promote engagement while not gating or limiting delivery efficacy
Optimize
Identify approaches to enable simplification and modernization to promote cost efficiency
Support benchmarking where appropriate to ensure cost and quality of service is competitive
Inspire
Inform talent strategy ensuring the right skills to support ongoing innovation and modernization
Provide growth paths to enable fair and thoughtful movement across roles in the organization
Perform
Integrate cyber security throughout delivery processes and ensure integrated monitoring for reliability and performance in production, across hosted and cloud-based environments
What It Is: Companies lean on consulting organizations to bring capabilities to bear in the interest of solving problems. GenAI has caused disruption in how organizations are thinking about that relationship
Why It Matters: Organizations want to spend money effectively in the interest of fostering innovation, creating competitive advantage, and generating value. The relatively low cost and speed with which one can write prompts and obtain rapid, well-formed responses challenges many aspects of the time and cost involved in traditional consulting efforts, which is the focus of this article
Where GenAI is Helpful
It’s important to acknowledge that leveraging AI can be a relatively easy and highly effective way to quickly surface content that is informative and pertinent to a given subject or request
Some Additional Considerations, however… in favor of Consultants
If every complex business problem were as simple as reading a book, that book would've been written (many have, in fact), everyone would have read it, and peak effectiveness would already have been achieved. Except that's not the case. We don't operate in a perfect world of theoretical norms. Real life involves context and specifics, including the dynamics, idiosyncrasies, and people that are part of organizations. Part of the role consultants play is to translate and apply conceptual ideals to real-world environments, and that's one of the ways that they create value for their clients
Part of what you buy with consulting is also institutional knowledge that comes from experience working in a given industry, access to other organizations and the ways they operate, etc., that is not “publicly available”. With the advent of AI, there will be a rise in proprietary data providers that cover gaps in what is available in the public domain, but if you are contracting to access those additional insights, you’re essentially still paying someone for outside perspective
Business challenges are complex, have multiple dimensions, and your priorities play a significant role in what you ultimately achieve. Part of the value consultants provide is not just a laundry list of things to do or concepts to consider, but a thoughtful set of priorities and a way to navigate complex situations with your organization's realities and your environment in mind
Another reason organizations look to consultants is to help foster and promote innovation. Innovation, by definition, is not “best practice”. It exists in the white space between what is and what could be, and that is not something that can be harvested from a knowledge base, no matter how quickly you can access it, or how well-formed it appears on a screen. It is a creative act in itself, and having someone help facilitate that process can create substantial value
To the extent that a consultancy is providing experience on something that is outside of an organization's core competency, understanding the quality of what comes from an AI-only solution could be difficult, if not impossible. If that inquiry is critical to your business, the next question is whether you want to place a bet without understanding whether the answer is based in fact or on potential "hallucinations", or the degree to which it is comprehensive at all
While it is not always the nature of consulting, having an unbiased perspective can be extremely valuable. The result of an AI inquiry can be shaped by the nature of the prompt that was written and what the tool knows about the requestor. Strategy consultants are meant to eliminate a level of confirmation bias that could limit the potential of what is possible
Finally, while we may trust technology to varying degrees in everyday life, there is something to be said about the level of trust it takes to rely on it without objective and authoritative support. Prompting can lead to continually refined results, but there is also value and benefit in having a conversation, a true understanding of needs, and the comfort that comes with actual human interaction and discourse and a solution that is right for "you".
What It Is: With the advent of AI, the question is how to integrate it effectively at an enterprise level. The long-term view should be a synthesis of applications, AI, and data, working in harmony, providing integrated capabilities that maximize effectiveness and productivity for the end users of technology
Why It Matters: Much like the .com era, there are lofty expectations of what AI can deliver without a fundamental strategy for how those capabilities will be integrated and leveraged at scale. Selecting the right approach that balances tactical gains with strategic infrastructure will be critical to optimizing and delivering differentiated value rapidly and consistently in a highly competitive business environment
Key Concepts
AI is a capability, not an end in itself. User-centered design is more important than ever
Resist the temptation to treat AI as a one-off and integrate it with existing portfolio processes
The end goal is to expose and harness all of an organization’s capabilities in a consistent way
Agentic solutions will become much more mainstream, along with orchestration of processes
The more agentic solutions become standard, the fewer application-specific front ends will be needed
Natural language input will become common to reduce manual entry in various processes
We will shift from generating content via LLMs to optimizing processes and transactions via causal models
AI should help personalize solutions, reduce complexity, and improve productivity
Only a limited number of sidecar applications can be deployed before overwhelming end users
The less standardized the environment is, the longer it will take to achieve enterprise AI benefits
As with any transformation, don't try to boil the ocean; have a strategy and migrate over time
Approach
Ensure architecture governance is in place quickly to avoid accruing significant technical debt
Design towards an enterprise architecture framework to enable rapid scaling and deployment
Migrate towards domain-based ecosystems to facilitate evolution and rapid scaling of capability
Enable rapid, disciplined, and governed experiments to explore tools and solution approaches
Place heavy emphasis on integration standards as a means to deploy new AI services with speed
Develop a conceptual "template" for how AI capabilities will be integrated to facilitate reuse (a sketch of one such template follows this list)
Organize AI services into insights (inform), agents (assist), and experts (benchmark, train, act)
Separate internal from package-provided AI services to provide agility and manage overall costs
Evaluate internal and external solutions by their ability to integrate services and enable agents
Reinforce data management and data governance processes to enable quality insights
Define roles and expectations for those in the organization who develop, use, and manage AI
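To make the "template" and service categories above more tangible, below is a minimal sketch in Python of how AI services might be registered under an insights/agents/experts taxonomy behind a common envelope. All names, fields, and the example service are hypothetical; the real standard would depend on your integration stack and governance model.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any, Callable, Dict


class ServiceRole(Enum):
    INSIGHT = "insight"   # inform the user (observations, recommendations)
    AGENT = "agent"       # assist by orchestrating actions on the user's behalf
    EXPERT = "expert"     # benchmark, train, and advise within or across domains


@dataclass
class AIServiceDescriptor:
    """Common 'envelope' every AI service publishes, regardless of vendor or domain."""
    name: str
    role: ServiceRole
    domain: str                      # e.g., "procurement", "hr", "manufacturing"
    owner: str                       # internal vs. package-provided (cost/agility split)
    handler: Callable[[Dict[str, Any]], Dict[str, Any]]


class AIServiceRegistry:
    """Single, governed catalog so new services can be discovered and reused."""
    def __init__(self) -> None:
        self._services: Dict[str, AIServiceDescriptor] = {}

    def register(self, descriptor: AIServiceDescriptor) -> None:
        self._services[descriptor.name] = descriptor

    def invoke(self, name: str, request: Dict[str, Any]) -> Dict[str, Any]:
        return self._services[name].handler(request)


# Hypothetical internal insight service, kept separate from package-provided ones.
def late_delivery_risk(request: Dict[str, Any]) -> Dict[str, Any]:
    return {"supplier": request["supplier"], "risk": "elevated", "basis": "internal model"}


registry = AIServiceRegistry()
registry.register(AIServiceDescriptor(
    name="supplier-late-delivery-risk",
    role=ServiceRole.INSIGHT,
    domain="procurement",
    owner="internal",
    handler=late_delivery_risk,
))

print(registry.invoke("supplier-late-delivery-risk", {"supplier": "ACME"}))
```

The point of the envelope is that any application or agent can discover and invoke a service the same way, regardless of whether it was built internally or came with a package.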
I remember a situation where an organization decided to aggressively offshore work to reduce costs. The direction to IT leaders was no different than that of the Gold Rush above. Leaders were given a mandate, a partner with whom to work, and a massive amount of contracting ensued. The result? A significant number of very small staff augmentation agreements (e.g., 1-3 FTEs), a reduction in fixed operating expenses but a disproportionate increase in variable ones, and a governance and administrative nightmare. How did it happen? Well, there was leadership support, a vision, and a desired benefit, but no commonly understood approach, plan, or governance. The organization then spent a considerable amount of time transitioning all of the agreements in place to something more deliberate, establishing governance, and optimizing what quickly became a very expensive endeavor.
The requirements of transformation are no different today than they ever have been. You need a vision, but also the conditions to promote success, and that includes an enabling culture, a clear approach, and governance to keep things on track and make adjustments where needed.
This is the final post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”. The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.
Overall Approach
Writing about IT operating models can be very complex for various reasons, mostly because there are many ways to structure IT work based on the size and complexity of an organization, and there is nothing wrong with that in principle. A small- to medium-size IT organization, as an example, likely has little separation of concerns and hierarchy, people playing multiple roles, a manageable project portfolio, and minimal coordination required in delivery. Standardization is not as complex and requires considerably less "design" than in a multi-national or large-scale organization, where standardization and reuse need to be weighed against the cost of coordination and management involved as the footprint scales. There can also be other considerations, such as one I experienced where simplification of a global technology footprint was driven by operating similarities across countries rather than geographic or regional considerations.
What tends to be common across ways of structuring and organizing is the various IT functions that exist (e.g., portfolio management, enterprise architecture, app development, data & analytics, infrastructure, IT operations), just at different scales and levels of complexity. These can be capability-based, organized by business partner, with some capabilities centralized and some federated, etc., but the same essential functions are likely in place, at varying levels of maturity and scale based on the needs of the overall organization. In multi-national or multi-entity organizations/conglomerates, these functions will likely be replicated across multiple IT organizations, with some or no shared capability existing at the parent/holding company or global level.
To that end, I am going to explore how I think about integrating the future state concepts described in the earlier articles of this series in terms of an approach and conceptual framework that, hopefully, can be applied to a broad range of IT organizations, regardless of their actual structure.
The critical challenge with moving from our current environment to one where AI, apps, and data are synthesized and integrated is doing it in a way that follows a consistent approach and architecture while not centralizing so much that we either constrain progress or limit the innovation that can be obtained by spreading the work across a broader percentage of an organization (business teams included). This is consistent with the templatized approach discussed in the prior article on Transition, but there can be many ways that effort is planned and executed based on the scale and complexity of the organization undertaking the transformation itself.
Key Considerations
Define the Opportunities, Establish the Framework, Define the Standards
Before being concerned with how to organize and execute, we first need to have a mental model for how teams will engage with an AI-enabled technology environment in the future. Minimally, I believe that will manifest itself in four ways:
Those who define and govern the overall framework
Those who leverage AI-enabled capabilities to do their work
Those who help create the future environment
Those who facilitate transition to the future state operating model
I will explore the individual roles in the next section, but an important first step is defining the level of standardization and reuse that is desirable from an enterprise architecture standpoint. That question becomes considerably more complex in organizations with multiple entities, particularly when there are significant differences in the markets they serve, the products/services they provide, etc. That doesn't mean, however, that reuse and optimization opportunities don't exist, but rather that they need to be more thoughtfully defined and developed so as not to slow any individual organization down in its ability to innovate and promote competitive advantage. There may be common capabilities that make more sense to develop centrally and reuse, and that can actually accelerate speed-to-market if built in a thoughtful manner.
Regardless of whether capabilities in a larger organization are designed and developed in a highly distributed manner, having a common approach to the overall architecture and standards (as discussed in the Framework article in this series) could be a way to facilitate learning and optimization in the future (within and across entities), which will be covered in the third portion of this article.
Clarify the Roles and Expectations, Educate Everyone
The table below is not meant to be exhaustive, but rather to act as a starting point to consider how different individuals and teams will engage with the future enterprise technology environment and highlight the opportunity to clarify those various roles and expectations in the interest of promoting efficiency and excellence.
While I'm not going to elaborate on the individual data points in the table itself, a few points are worth noting in relation to it.
There is a key assumption that AI-related tools will be vetted, approved, and governed across an enterprise (in the above case, by the Enterprise Architecture function). This is to promote consistency and effectiveness, manage risk, and consider issues related to privacy, IP-protection, compliance, security, and other critical concerns that otherwise would be difficult to manage without some formal oversight.
It is assumed that "low-hanging fruit" tools like Copilot, LLMs, and other AI tools will be used to improve productivity and look for efficiency gains while implementing a broader, modern, and integrated future state technology footprint with integrated agents, insights, and experts. The latter has touchpoints across an organization, which is why having a defined framework, standards, and governance is so important in creating the most cost-effective and rapid path to transforming the environment into one that creates disproportionate value and competitive advantage.
Finally, there are adjustments to be made in various operating aspects of running IT, which reinforces the idea that AI should not be a separate "thing" or a "silver bullet"; it needs to become an integrated part of the way an organization leverages and delivers technology capabilities. To the extent it is treated as something different or special and separated from the ongoing portfolio of work and operational monitoring and management processes, it will eventually fail to integrate well, increase costs, and underdeliver on the value contributions that are being widely chased today. Everyone across the organization should also be made aware of the above roles and expectations, along with how these new AI-related capabilities are being leveraged, so they can help identify ongoing opportunities to improve their adoption across the enterprise.
Govern, Coordinate, Collaborate, Optimize
With a model in place and organizational roles and responsibilities clarified, there needs to be a mechanism to collect learnings, facilitate improvements to the “templatized approach” referenced in the previous article in this series, and drive continuous improvement in how the organization functions and leverages these new capabilities.
This can manifest in several ways when spread across a medium to large organization, namely:
Teams can work in partnership or in parallel to try a new process or technology and develop learnings together
One team can take the lead to attempt a new approach or innovation and share learnings with others in a fast-follower approach
Teams can try different approaches to the same type of solution (when there isn’t a clear best option), benchmark, and select the preferred approach based on the learning across efforts
The point is that, especially when there is scale, centralizing too much can hamper learning and innovation; to achieve maximum efficiency, it is better to develop coordinated approaches that can be governed and leveraged than to have a "free for all" where the overall opportunity to innovate and capture efficiencies is compromised.
Summing Up
As I mentioned at the outset, the challenge in discussing IT implications from establishing a future enterprise environment with integrated AI is that there are so many possibilities for how companies can organize around it. That being said, I do believe a framework for the future intelligent enterprise should be defined, roles across the organization should be clarified, and transition to that future state should be governed with an eye towards promoting value creation, efficiency, and learning.
This concludes the series on my point of view related to the future of enterprise technology with AI, applications, and data and analytics integrated into one, aligned strategy. No doubt the concepts will evolve as we continue to learn and experiment with these capabilities, but I believe there is always room for strategy and a defined approach rather than an excess of ungoverned “experimentation”. History has taught us that there are no silver bullets, and that is the case with AI as well. We will obtain the maximum value from these new technologies when we are thoughtful and deliberate in how we integrate them with how we work and the assets and data we possess. Treating them as something separate and distinct will only suboptimize the value we create over time and those who want to promote excellence will be well served to map out their strategy sooner rather than later.
I hope the ideas across this series were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
No one builds applications or uses new technology with the intention of making things worse… and yet we have and still do at times.
Why does this occur, time and again, with technology? The latest thing was supposed to "disrupt" or "transform" everything. I read something that suggested this was it. The thing we needed to do, that a large percentage of companies were planning, that was going to represent $Y billions of spending two years from now, generating disproportionate efficiency, profitability, and so on. Two years later (if that), there was something else being discussed, a considerable number of "learnings" from the previous exercise, but the focus was no longer the same… whether that was Windows applications and client/server computing, the internet, enterprise middleware, CRM, Big Data, data lakes, SaaS, PaaS, microservices, mobile applications, converged infrastructure, public cloud… the list is quite long and I'm not sure that productivity and the value/cost equation for technology investments are any better in many cases.
The belief that technology can have such a major impact and the degree of continual change involved have always made the work challenging, inspiring, and fun. That being said, the tendency to rush into the next advance without forming a thoughtful strategy or being deliberate about execution can be perilous in what it often leaves behind, which is generally frustration for the end users/consumers of those solutions and more unrealized benefits and technical debt for an organization. We have to do better with AI, bringing intelligence into the way we work, not treating it as something separate entirely. That’s when we will realize the full potential of the capabilities these technologies provide. In the case of an application portfolio, this is about our evolution to a suite of intelligent applications that fit into the connected ecosystem framework I described earlier in the series.
This is the fourth post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”. The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.
Design Dimensions
In line with the blueprint above, articles 2-5 highlight key dimensions of the model in the interest of clarifying various aspects of the conceptual design. I am not planning to delve into specific packages or technologies that can be used to implement these concepts as the best way to do something always evolves in technology, while design patterns tend to last. The highlighted areas and associated numbers on the diagram correspond to the dimensions described below.
Before exploring various scenarios for how we will evolve the application landscape, it’s important to note a couple overall assumptions:
End user needs and understanding should come first, then the capabilities
Not every application needs to evolve. There should be a benefit to doing so
I believe the vast majority of product/platform providers will eventually provide AI capabilities
Providing application services doesn’t mean I have to have a “front-end” in the future
Governance is critical, especially to the extent that citizen development is encouraged
If we're not mindful of how many AI apps we deploy, we will cause confusion and productivity loss because of the fragmented experience
Purchased Software (1)
The diagram below highlights a few different scenarios for how I believe intelligence will find its way into applications.
In the case of purchased applications, between the market buzz and continuing desire for differentiation, it is extremely likely that a large share of purchased software products and platforms will have some level of “AI” included in the future, whether that is an AI/ML capability leveraging OLTP data that lives within its ecosystem, or something more causal and advanced in nature.
I believe it is important to delineate between internally generated insights and ones coming as part of a package for several reasons. First, we may not always want to include proprietary data in purchased solutions, especially to the degree they are hosted in the public cloud and we don't want to expose our internal data to that environment from a security, privacy, or compliance standpoint. Second, we may not want to expose the rules and IP associated with our decisioning and specific business processes to the solution provider. Third, to the degree we maintain these as separate things, we create flexibility to potentially migrate to a different platform more easily than if we are tightly woven into a specific package. And, finally, the required data ingress to commingle a larger data set to expand the nature of what a package could provide "out of the box" may inflate operating costs of the platforms unnecessarily (this can definitely be the case with ERP platforms).
The overall assumption is that, rather than require custom enhancements of a base product, the goal from an architecture standpoint would be for the application to be able to consume and display information from an external AI service that is provided by your organization. This is available today within multiple ERP platforms, as an example.
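As a rough illustration of that pattern, the sketch below shows an internally owned insight endpoint that a purchased application could call and render alongside its own data. The request/response fields and the service itself are hypothetical, not any specific vendor's API; in practice this would sit behind your standard integration layer.

```python
import json
from dataclasses import asdict, dataclass
from typing import List


@dataclass
class InsightRequest:
    # Context the purchased application passes along; no proprietary model or
    # decision rules are embedded in the package itself.
    record_type: str     # e.g., "purchase_order"
    record_id: str
    user_role: str


@dataclass
class InsightResponse:
    headline: str
    details: List[str]
    confidence: float    # lets the UI signal how much weight to give the insight


def insight_endpoint(payload: str) -> str:
    """Internally owned service; the package only consumes and displays the result."""
    req = InsightRequest(**json.loads(payload))
    resp = InsightResponse(
        headline=f"Delivery risk flagged for {req.record_type} {req.record_id}",
        details=["Supplier has missed 3 of the last 10 promised dates"],
        confidence=0.72,
    )
    return json.dumps(asdict(resp))


# What the purchased application would do with the contract (simulated in-process here).
rendered = json.loads(insight_endpoint(json.dumps(
    {"record_type": "purchase_order", "record_id": "PO-1001", "user_role": "buyer"}
)))
print(rendered["headline"])
```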
The graphic below shows two different migration paths towards a future state where applications have both package-provided and internally provided AI capabilities. In the first, the package provider moves first, internal capabilities are developed in parallel as a sidecar application, and then eventually fully integrated into the platform as a service; in the second, the internal capability is developed first, run in parallel, and then folded into the platform solution.
Custom-Developed Software (2)
In terms of custom software, the challenge is, first, evaluating whether there is value in introducing additional capabilities for the end user and, second, understanding the implications for trying to integrate the capabilities into the application itself versus leaving them separate.
In the event that there is uncertainty about the end-user value of having the capability, implementing the insights as a sidecar/standalone application first, then looking to integrate them within the application as an integrated capability as a second step, may be the best approach.
If a significant amount of redesign or modernization is required to directly integrate the capabilities, it may make sense to either evaluate market alternatives as a replacement to the internal application or to leave the insights separate entirely. Similar to purchased products, the insights should be delivered as a service and integrated into the application versus being built as an enhancement to provide greater flexibility for how they are leveraged and to simplify migrations to a different solution in the future.
The third scenario in the diagram above is meant to reflect a separate insights application that is then folded into the custom application as a service over time, creating a more seamless experience for the end user.
Either way, whether it be a purchased or custom-built solution, the important points are to decouple the insights from the applications to provide flexibility, and to provide both a front end for users to interact with the applications and a service-based approach, so that an agent acting on behalf of the user, or the system itself, could orchestrate various capabilities exposed by that application without the need for user intervention.
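One hedged way to picture that dual access: the capability is exposed once as a service, and both the human-facing front end and an agent call it through the same interface. Everything below (function names, the rescheduling example) is illustrative.

```python
from typing import Any, Dict

# The application's capability, exposed once as a service rather than buried in its UI.
def reschedule_shipment(order_id: str, new_date: str) -> Dict[str, Any]:
    return {"order_id": order_id, "status": "rescheduled", "new_date": new_date}


def front_end_handler(form_data: Dict[str, str]) -> Dict[str, Any]:
    """Path 1: a user clicks through the application's own front end."""
    return reschedule_shipment(form_data["order_id"], form_data["new_date"])


def agent_plan_step(step: Dict[str, Any]) -> Dict[str, Any]:
    """Path 2: an agent executes the same capability on the user's behalf,
    with no application-specific front end involved."""
    if step["capability"] == "reschedule_shipment":
        return reschedule_shipment(step["args"]["order_id"], step["args"]["new_date"])
    raise ValueError(f"Unknown capability: {step['capability']}")


print(front_end_handler({"order_id": "SO-42", "new_date": "2025-10-01"}))
print(agent_plan_step({"capability": "reschedule_shipment",
                       "args": {"order_id": "SO-42", "new_date": "2025-10-01"}}))
```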
From Disconnected to Integrated Insights (3)
One of the reasons for separating out these various migration scenarios is to highlight the risk that introducing too many sidecar or special/single-purpose applications could cause significant complexity if not managed and governed carefully. Insights should serve a process or need, and if the goal is to make a user more productive, effective, or safer, those capabilities should ultimately be used to create more intelligent applications that are easier to use. To that end, there likely would be value in working through a full product lifecycle when introducing a new capability, to determine whether it is meant to be preserved, integrated with a core application (as a service), or tested and possibly decommissioned once a more integrated capability is available.
Summing Up
While the experience of a consumer of technology likely will change and (hopefully) become more intuitive and convenient with the introduction of AI and agents, developing a thoughtful application architecture strategy, leveraging components and services, and putting the end user first will be priorities if we are going to obtain the value of these capabilities at an enterprise level. Intelligent applications are where we are headed, and our ability to work with an integrated vision of the future will be critical to realizing the benefits available in that world.
The next article will focus on how we should think about the data and analytics environment in the future state.
Up Next: Deconstructing Data-Centricity
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
“If only I could find an article that focused on AI”… said no one, any time recently.
In a perfect world, I don’t want “AI” anything, I want to be able to be more efficient, effective, and competitive. I want all of my capabilities to be seamlessly folded into the way people work so they become part of the fabric of the future environment. That is why having an enterprise-level blueprint for the future is so critically important. Things should fit together seamlessly and they often don’t, especially when we don’t design with integration in mind from the start. That friction slows us down, costs us more, and makes us less productive than we should be.
This is the third post in a series focused on where I believe technology is heading, with an eye towards a more harmonious integration and synthesis of applications, AI, and data… what I previously referred to in March of 2022 as “The Intelligent Enterprise”. The sooner we begin to operate off a unified view of how to align, integrate, and leverage these oftentimes disjointed capabilities today, the faster an organization will leapfrog others in their ability to drive sustainable productivity, profitability, and competitive advantage.
Design Dimensions
In line with the blueprint above, articles 2-5 highlight key dimensions of the model in the interest of clarifying various aspects of the conceptual design. I am not planning to delve into specific packages or technologies that can be used to implement these concepts as the best way to do something always evolves in technology, while design patterns tend to last. The highlighted areas and associated numbers on the diagram correspond to the dimensions described below.
Natural Language First (1)
I don't own an Alexa device, but I have certainly had the experience of talking to someone who does, and heard them say "Alexa, do this…", then repeat themselves, then repeat themselves again, adjusting their word choice slightly, or slowing down what they said, with increasing levels of frustration, until eventually the original thing happens.
These experiences of voice-to-text and natural language processing have been anything but frictionless: quite the opposite, in fact. With the advent of large language models (LLMs), it’s likely that these kinds of interactions will become considerably easier and more accurate, along with the integration of written and spoken input being a means to initiate one or more actions from an end user standpoint.
Is there a benefit? Certainly. Take the case of a medical care provider directing calls to a centralized number for post-operative and case management follow-ups. A large volume of calls needs to be processed, and there are qualified medical personnel available to handle them on a prioritized basis. The technology can play the role of a silent listener, both recording key points of the conversation and recommended actions (saving time in documenting the calls), and also making contextual observations integrated with the healthcare worker's application (providing insights) to potentially help address any needs that arise mid-discussion. The net impact could be a higher volume of calls processed due to the reduction in time spent documenting calls and improved quality of care from the additional insights provided to the healthcare professional. Is this artificial intelligence replacing workers? No, it is helping them be more productive and effective by integrating into the work they are already doing, reducing lower-value-add activities and allowing them to focus more on patient care.
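A minimal sketch of what that "silent listener" flow could look like, assuming a speech-to-text feed and an LLM-backed summarizer are available (both are stubbed here, and all names and clinical details are hypothetical):

```python
from dataclasses import dataclass
from typing import List


@dataclass
class CallNotes:
    key_points: List[str]
    recommended_actions: List[str]


def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for a real speech-to-text service."""
    return "Patient reports mild swelling at the incision site, pain level 3 of 10."


def summarize(transcript: str) -> CallNotes:
    """Stand-in for an LLM-backed summarizer that drafts documentation for human review."""
    return CallNotes(
        key_points=["Mild swelling at incision site", "Pain 3/10, stable"],
        recommended_actions=["Schedule wound check within 48 hours"],
    )


def silent_listener(audio_chunk: bytes, patient_record: dict) -> dict:
    transcript = transcribe(audio_chunk)
    notes = summarize(transcript)
    # Contextual observation: join the conversation with what the worker's
    # application already knows about the patient.
    alerts = []
    if "anticoagulant" in patient_record.get("medications", []):
        alerts.append("Patient on anticoagulants; swelling may warrant earlier follow-up")
    return {"draft_notes": notes, "contextual_alerts": alerts}


result = silent_listener(b"...", {"medications": ["anticoagulant"]})
print(result["draft_notes"].recommended_actions, result["contextual_alerts"])
```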
If natural language processing can be integrated such that comprehension is highly accurate, I can foresee where a large amount of end user input could be provided this way in the future. That being said, the mechanics of a process and the associated experience still need to be evaluated so that it doesn’t become as cumbersome as some voice response mechanisms in place today can be, asking you to “say or enter” a response, then confirming what you said back to you, then asking for you to confirm that, only to repeat this kind of process multiple times. No doubt, there is a spreadsheet somewhere to indicate savings for organizations in using this kind of technology by comparison with having someone answer a phone call. The problem is that there is a very tedious and unpleasant customer experience on the other side of those savings, and that shouldn’t be the way we design our future environments.
Orchestration is King (2)
Where artificial intelligence becomes powerful is when it pivots from understanding to execution.
Submitting a natural language request, “I would like to…” or “Do the following on my behalf…”, having the underlying engine convert that request to a sequence of actions, and then ultimately executing those requests is where the power of orchestration comes in.
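A rough sketch of that pipeline follows. The planner is stubbed with a fixed plan; in practice an LLM or similar engine would produce the action sequence, and the capabilities would be real services rather than the illustrative lambdas used here.

```python
from typing import Callable, Dict, List

# Capabilities the orchestrator is allowed to call, registered ahead of time.
CAPABILITIES: Dict[str, Callable[..., str]] = {
    "fetch_calendar": lambda day: f"3 meetings found for {day}",
    "summarize_messages": lambda: "2 messages need a reply today",
}


def plan(request: str) -> List[dict]:
    """Stand-in for the engine that converts a natural language request
    into an ordered sequence of capability calls."""
    return [
        {"capability": "fetch_calendar", "args": {"day": "today"}},
        {"capability": "summarize_messages", "args": {}},
    ]


def execute(request: str) -> List[str]:
    results = []
    for step in plan(request):
        handler = CAPABILITIES[step["capability"]]
        results.append(handler(**step["args"]))
    return results


print(execute("Review my messages and calendar and tell me what to focus on today"))
```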
Going back to my earlier article on The Future of IT from March of 2024, I believe we will pivot from organizations needing to create, own, and manage a large percentage of their technology footprint to largely becoming consumers of technologies produced by others, which they configure to enable their business rules and constraints and orchestrate to align with their business processes.
Orchestration will exist on four levels in the future:
That which is done on behalf of the end user to enable and support their work (e.g., review messages, notifications, and calendar to identify priorities for my workday)
That which is done within a given domain to coordinate transaction processing and optimize leverage of various components within a given ecosystem (e.g., new hire onboarding within an HR ecosystem or supplier onboarding within the procurement domain)
That which is done across domains to coordinate activity that spans multiple domains (e.g., optimizing production plans coming from an ERP system to align with MES and EAM systems in Manufacturing given execution and maintenance needs)
Finally, that which is done within the data and analytics environment to minimize data movement and compute while leveraging the right services to generate a desired outcome (e.g., optimizing cost and minimizing the data footprint by comparison with more monolithic approaches)
Beyond the above, we will also see agents taking action on behalf of other, higher-level agents, in more of a hierarchical relationship where a process is decomposed into subtasks executed (ideally in parallel) to serve an overall need.
Each of these approaches refers back to the concept of leveraging defined ecosystems and standard integration as discussed in the previous article on the overarching framework.
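As a small illustration of the hierarchical pattern described above, the sketch below uses asyncio to stand in for parallel subtask execution; the decomposition is hard-coded, whereas a real coordinator agent would derive it from the goal.

```python
import asyncio
from typing import List


async def worker_agent(subtask: str) -> str:
    """Lower-level agent handling one narrow subtask."""
    await asyncio.sleep(0.1)  # simulate calling a service or another model
    return f"done: {subtask}"


async def coordinator_agent(goal: str) -> List[str]:
    """Higher-level agent: decomposes the goal and fans out to worker agents in parallel."""
    subtasks = ["collect supplier quotes", "check inventory levels", "draft purchase request"]
    return await asyncio.gather(*(worker_agent(s) for s in subtasks))


print(asyncio.run(coordinator_agent("restock critical parts")))
```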
What is critical is to think about this as a journey towards maturing and exposing organizational capabilities. If we assume an end user wants to initiate a set of transactions through a verbal command, which is then turned into a process to be orchestrated on their behalf, we need to be able to expose the services that are required to ultimately enable that request, whether that involves applications, intelligence, data, or some combination of the three. If we establish the underlying framework to enable this kind of orchestration, however it is initiated, whether through an application, an agent, or some other mechanism, we could theoretically plug new capabilities into that framework to expand our enterprise-level technology capabilities over time, creating exponential opportunity to make more of our technology investments. The goal is to break down all the silos and make every capability we have accessible to be orchestrated on behalf of an end user or the organization.
I met with a business partner not that long ago who was a strong advocate for “liberating our data”. My argument would be that the future of an intelligent enterprise should be to “liberate all of our capabilities”.
Insights, Agents, and Experts (3)
Having focused on orchestration, which is a key capability within agentic solutions, I did want to come back to three roles that I believe AI can fulfill in an enterprise ecosystem of the future:
Insights – observations or recommendations meant to inform a user to make them more productive, effective, or safer
Agents – applications that orchestrate one or more activities on behalf of or in concert with an end user
Experts – applications that act as a reference for learning and development and serve as a representation of the "ideal" state, either within a given domain (e.g., a Procurement "Expert" may have accumulated knowledge of best practices, market data, and internal KPIs and goals that allows end users and applications to interact with it as an interactive knowledge base meant to help optimize performance) or across domains (i.e., extending the role of a domain-based expert to focus on enterprise-level objectives and to help calibrate the goals of individual domains so those overall outcomes are achieved more effectively)
I'm not aware of "Expert"-type capabilities existing today, for the most part, but I do believe having more of an autonomous entity that can provide support, guidance, and benchmarking to help optimize the performance of individuals and systems could be a compelling way to leverage AI in the future.
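Since these "Expert" capabilities largely don't exist yet, the following is purely speculative: a sketch of how a user or application might query a domain expert service that blends internal KPIs with reference data to benchmark performance and offer guidance. All data, names, and thresholds are invented for illustration.

```python
from dataclasses import dataclass
from typing import Dict


@dataclass
class BenchmarkAnswer:
    metric: str
    internal_value: float
    reference_value: float
    guidance: str


class ProcurementExpert:
    """Hypothetical domain 'Expert': an interactive knowledge base that benchmarks
    internal performance against accumulated practice and market data."""
    def __init__(self, internal_kpis: Dict[str, float], reference_data: Dict[str, float]):
        self.internal_kpis = internal_kpis
        self.reference_data = reference_data

    def benchmark(self, metric: str) -> BenchmarkAnswer:
        internal = self.internal_kpis[metric]
        reference = self.reference_data[metric]
        gap = internal - reference
        guidance = ("Within expected range" if abs(gap) < 2
                    else "Review supplier onboarding cycle for bottlenecks")
        return BenchmarkAnswer(metric, internal, reference, guidance)


expert = ProcurementExpert(
    internal_kpis={"supplier_onboarding_days": 21.0},
    reference_data={"supplier_onboarding_days": 14.0},
)
print(expert.benchmark("supplier_onboarding_days"))
```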
AI as a Service (4)
I will address how AI should be integrated into an application portfolio in the next article, but I felt it was important to clarify that, while AI is being discussed as an objective, a product, and an outcome in many cases today, I believe it should be thought of as a service that lives within, and is developed as part of, a data and analytics capability. This feels like the right logical association because the insights and capabilities associated with AI are largely data-centric and heavily model dependent, and that should live separate from applications meant to express those insights and capabilities to an end user.
From my experience, the complicating factor could arise in how the work is approached and in the capabilities of the leaders charged with AI implementation, something I will address in the seventh article in this series on organizational considerations.
Suffice it to say that I see AI as an application-oriented capability, even though it is heavily dependent on data and your underlying model. To the extent that a number of data leaders can come from a background focused on storage, optimization, and performance of traditional or even advanced analytics/data science capabilities, they may not be ideal candidates to establish the vision for AI, given it benefits from more of an outside-in (consumer-driven) mindset than an inside-out (data-focused) approach.
Summing Up
With all the attention being given to AI, the main purpose of breaking it down in the manner I have above is to think about how we integrate and leverage it within and across an enterprise and, most importantly, not to treat it as a silo or a one-off. That is not the right way to approach AI moving forward. It will absolutely become part of the way people work, but it is a capability like many others in technology, and it is critically important that we continue to start with the consumers of technology and how we are making them more productive, effective, safe, and so on.
The next two articles will focus on how we integrate AI into the application and data environments.
Up Next: Evolving Applications
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
In my recent article on Exploring Artificial Intelligence, I covered several dimensions of how I think about the direction of AI, including how models will evolve from general-purpose and broad-based to be more value-focused and end consumer-specific as the above diagram is intended to illustrate.
The purpose of this article is to dive a little deeper into a mental model for how I believe the technology could become more relevant and valuable in an end-user (or end consumer) specific context.
Before that, a few assertions related to the technology and end user application of the technology:
The more we can passively collect data in the interest of simplifying end-user tasks and informing models, the better. People can be both inconsistent and unreliable in how they capture data. In reality, our cell phones are collecting massive amounts of data on an ongoing basis that is used to drive targeted advertising and other capabilities to us without our involvement. In a business context, however, the concept of doing so can be met with significant privacy and other concerns and it’s a shame because, while there is data being collected on our devices regardless, we aren’t able to benefit from it in the context of doing our work
Moving from a broad- or persona-based means of delivering technology capabilities to a consumer-specific approach is a potentially significant advancement in enabling productivity and effectiveness. This would be difficult or impossible to achieve without leveraging an adaptive approach that synthesizes various technologies (personalization, customization, dynamic code generation, role-based access control, AI/ML models, LLMs, content management, and so forth) to create a more cohesive and personalized user experience
While I am largely focusing on end-user application of the technology, I would argue that the same concepts and approach could be leveraged for the next generation of intelligent devices and digital equipment, such as robotics in factory automation scenarios
To make the technology both performant and relevant, part of the design challenge is to continually reduce and refine the level of "model" information that is needed at the next layer of processing so as not to overload the end computing device (presumably a cell phone or tablet) with a volume of data that isn't required to enable effective action on behalf of the data consumer.
The rest of this article will focus on providing a mental model for how to think about the relationship across the various kinds of models that may make up the future state of AI.
Starting with a “Real World” example
Having spent a good portion of my time off traveling across the U.S., and while I had a printed road atlas in my car, I was reminded more than once of the trust I place in Google Maps, particularly when driving along an "open range" gravel road with cattle roaming about in northwest Nebraska on my way to South Dakota. In many ways, navigation software represents a good starting point for where I believe intelligent applications will eventually go in the business environment.
Maps is useful as a tool because it synthesizes the data it has on roads and navigation options with specific information like my chosen destination, location, speed traps, delays, and accident information that is specific to my potential routes, allowing for a level of customization if I prefer to take routes that avoid tolls and so on. From an end-user perspective, it provides a next recommended action, remaining contextually relevant to where I am and what I need to do, along with how long it will be until that action needs to be taken, the distance remaining, and the time I should arrive at my final destination.
In a connected setting, navigation software pulls pieces of its overall model and applies data on where I am and where I’m going, to (ideally) help me get where I’m going as efficiently as possible. The application is useful because it is specific to me, to my destination, and to my preferred route, and is different than what would be delivered to a car immediately behind me, despite leveraging the same application and infrastructure. This is the direction I believe we need to go with intelligent applications, to drive individual productivity and effectiveness.
Introducing the “Tree of Knowledge” concept
The Overall Model
The visual above is meant to represent the relationship of general-purpose and foundational models to what is ultimately delivered to an end user (or piece of digital equipment) in a distributed fashion.
Conceptually, I think of the relationship across data sets as if it were a tree.
The general-purpose model (e.g., LLM) provides the trunk that establishes a foundation for downstream analytics
Domain-specific models (e.g., RAG) act as the branches that rely on the base model (i.e., the trunk) to provide process- or function-specific capabilities that can span a number of end-user applications, but have specific, targeted outcomes in mind
A “micro”-model is created when specific branches of the tree are deployed to an end-user based on their profile. This represents the subset that is relevant to that data consumer given their role, permissions, experience level, etc.
The data available at the end point (e.g., mobile device) then provides the leaves that populate the branches of the “micro”-models that have been deployed to create an adaptive model used to inform the end user and drive meaningful and productive action.
The adaptive model should also take into account user preferences (via customization options) and personalization to tune their experience as closely as possible to what they need and how they work.
In this way, the progression of models moves from general to very specific, end-user focused solutions that are contextualized with real-time data much the same as the navigation example above.
It is also worth noting that, in addition to delivering these capabilities, the mobile device (or endpoint) may collect and send data back to further inform and train the knowledge models by domain (e.g., process performance data) and potentially develop additional branches based on gaps that may surface in execution.
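To make the tree analogy slightly more concrete, here is a hedged sketch of how a "micro"-model might be assembled: the trunk and relevant branches are selected centrally based on the user's profile, and the leaves (local, real-time context) are attached at the endpoint. The structures and profile fields are illustrative, not a proposed standard.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class KnowledgeTree:
    trunk: str                               # general-purpose foundation (e.g., an LLM)
    branches: Dict[str, List[str]]           # domain-specific capabilities keyed by domain


@dataclass
class MicroModel:
    trunk: str
    branches: Dict[str, List[str]]           # only the branches relevant to this user
    leaves: Dict[str, str] = field(default_factory=dict)  # local, real-time context


def deploy_micro_model(tree: KnowledgeTree, user_profile: dict) -> MicroModel:
    """Central step: prune the tree to the subset this user's role and permissions need."""
    relevant = {domain: caps for domain, caps in tree.branches.items()
                if domain in user_profile["domains"]}
    return MicroModel(trunk=tree.trunk, branches=relevant)


def attach_leaves(model: MicroModel, device_context: Dict[str, str]) -> MicroModel:
    """Endpoint step: contextualize with data collected on the device itself."""
    model.leaves.update(device_context)
    return model


tree = KnowledgeTree(
    trunk="general-purpose-model",
    branches={"maintenance": ["work-order guidance"], "finance": ["spend analysis"]},
)
micro = deploy_micro_model(tree, {"domains": ["maintenance"], "experience": "novice"})
micro = attach_leaves(micro, {"location": "Line 3", "equipment": "Press 7"})
print(micro)
```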
Applying the Model
Having set context on the overall approach, there are some notable ways these capabilities could create a different experience and level of productivity than we see today, namely:
Rather than delivering content and transactional capabilities based on an end user's role and persona(s), those capabilities would be deployed to a user's device (the branches of the "micro"-model) and synthesized with other information (the "leaves") like the user's experience level, preferences, location, training needs, and equipment information (in a manufacturing-type context), to generate an interface specific to them that continually evolves to optimize their individual productivity
As new capabilities (i.e., "branches") are developed centrally, they could be deployed to targeted users, and their individual experiences would adapt to incorporate them in ways that work best for them and their given configuration, without having to relearn the underlying application(s)
Going Back to Navigation
On the last point above, a parallel example would be the introduction of weather information into navigation.
At least in Google Maps, while there are real-time elements like speed traps, traffic delays, and accidents factored into the application, there is currently no mechanism to recognize or warn end users about significant weather events that may also surface along the route. In practice, where severe weather is involved, this could represent a safety risk to the traveler and, in the event that the model were adapted to include a "branch" for this kind of data, one would hope that the application would behave the same from an end-user standpoint, but with the additional capability integrated into the application.
Wrapping Up
Understanding that we’re still early in the exploration of how AI will change the way we work, I believe that defining a framework for how various types of models can integrate and work across purposes would enable significant value and productivity if designed effectively.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.