Big Bang AI never works…so what does? I have taught an incremental approach to delivering data and AI products for years because it’s the fastest way to create customer value, generate growth, and drive adoption. Incremental sounds slow, but case study after case study shows why the Big Bang approach to AI ends up being slower. Adoption stalls when solutions are delivered all at once, yet the root cause of each problem is hard to see because it’s so ingrained in our product thinking.
I will use Microsoft as the case study in this article, but I could just as easily use Apple, Salesforce, or several others. I will explain the problems Copilot faces and why many of those problems were created by deciding to go big rather than going incrementally.
Why Isn’t Microsoft Dominating The Enterprise AI Market?
Microsoft’s Copilot line launched in 2021 with GitHub Copilot for coding and has expanded across Microsoft’s product lines, from software development tools to Office apps and business solutions. Copilot adoption has accelerated, driven by the promise of productivity gains.
Microsoft offers several Copilots:
GitHub Copilot
Microsoft 365 Copilot
Dynamics 365 Copilot
Power Platform Copilot
Security Copilot
Windows Copilot
Each Copilot variant serves a different user base, but all share the goal of augmenting human work with generative AI. Enterprise interest in generative AI tools like Copilot exploded after 2022’s ChatGPT moment. Surveys indicate that nearly all large organizations are exploring or planning to adopt these tools. A mid-2024 CIO survey by Morgan Stanley found that 94% of CIOs expect to adopt one or more of Microsoft’s generative AI offerings in the next 12 months, with Microsoft 365 Copilot being the favored solution.
Microsoft 365 Copilot was rolled out cautiously in an invitation-only early access program in early 2023. By September 2023, tens of thousands of enterprise users had tried Microsoft 365 Copilot, with prominent early adopters including Visa, General Motors, KPMG, and Lumen Technologies. These pilots generated marketing buzz and provided Microsoft with feedback to refine Copilot before general release. That’s a smart go-to-market strategy, so why isn’t Microsoft seeing significant revenue growth from its Copilot line?
Early data suggests that GitHub’s Copilot delivers tangible productivity improvements. GitHub’s studies, including a large trial at Accenture, found that using Copilot can help developers code up to 55% faster on certain tasks, and that 85–95% of developers report feeling more confident or satisfied with their work. Internal metrics from GitHub indicate that developers accept around 30% of Copilot’s code suggestions and that Copilot users submit pull requests more frequently, indicating faster iteration.
For Microsoft 365 Copilot, ROI isn’t as well-quantified. Microsoft commissioned Forrester to project benefits for businesses (a study for SMBs estimated Copilot could reduce operating costs by ~20% and increase revenue by ~6% via faster time-to-market). That’s where the trouble begins, and it’s one reason I teach AI ROI calculation methods. Vague promises and unsupported estimates are not enough to drive enterprise adoption.
The setup for success and enterprise interest are there, but the fact remains…Microsoft isn’t seeing it translate into significant revenue growth. Despite the excitement, companies have encountered several challenges that slow down or limit generative AI product adoption. The difference between GitHub Copilot and Microsoft 365 Copilot provides clues about what’s happening here.
Key Barriers To Adoption – Cost & ROI Calculation
The pricing of Copilot, especially Microsoft 365 Copilot, is a significant barrier for many enterprises. At $30 per user per month on top of existing Microsoft 365 licensing, the cost can be substantial at scale. For a large firm with thousands of employees, Copilot creates millions in new annual software spending. CIOs and CFOs have balked at the price, questioning if the ROI justifies the expense.
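The license math behind that sticker shock is easy to sketch. Here is a minimal back-of-the-envelope calculation; the 10,000-seat headcount is a hypothetical illustration, not a Microsoft figure:

```python
# Back-of-the-envelope Copilot license cost at enterprise scale.
# The price reflects the $30/user/month figure cited above; the
# headcount is a hypothetical example.
COPILOT_PRICE_PER_USER_MONTH = 30  # USD, on top of existing M365 licensing


def annual_copilot_cost(num_users: int,
                        price_per_month: float = COPILOT_PRICE_PER_USER_MONTH) -> float:
    """Annual incremental software spend to license Copilot for num_users."""
    return num_users * price_per_month * 12


# A 10,000-employee firm adds $3.6M per year in new software spend.
print(f"${annual_copilot_cost(10_000):,.0f}")  # → $3,600,000
```

At that scale, even a modest per-seat price becomes a board-level line item, which is why the ROI question dominates the buying decision.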
Microsoft 365 Copilot is a Big Bang AI solution designed to do many things for every part of the business. Early trials have produced mixed opinions on how well it meets that broad range of needs. The value delivered must clearly outweigh that $30/user cost; in some deployments, the use cases are so broad and convoluted that it hasn’t.
GitHub Copilot’s cost also adds up. However, it’s seen as more directly tied to developer productivity, making it an easier sell. The narrower, more targeted use case makes the ROI easier to pin down. An obvious value proposition is the clear differentiator between the two Copilots.
Organizations will delay full deployment or scale-up until they can prove the ROI in pilot programs. Cost sensitivity is especially acute in sectors with tight margins or budget constraints. Overall, pricing remains one of the top concerns tempering Copilot’s otherwise rapid adoption.
CIOs are having difficulty measuring Microsoft 365 Copilot’s exact impact. While anecdotes and small studies show efficiency gains, enterprises want hard data on productivity improvement, error reduction, or financial return, which can be tricky to isolate. Some leaders are in a “wait and see” mode, not yet committing enterprise-wide because they lack concrete proof of value beyond initial anecdotes.
I, and many others advising enterprises about AI, have seen countless companies stuck in pilot purgatory: rolling Copilots out to a subset of users but hesitating to go broader until they can quantify the benefits and confirm it’s the right solution. ROI for broad categories of knowledge work is assessed subjectively, producing a library of anecdotes rather than quantitative measures. That makes some executives nervous about green-lighting a costly tool without clear metrics.
SAP tried to address this by building an ROI calculator and migration tracker. Microsoft’s Copilot dashboard also tracks usage and outcomes. However, full adoption stalls until an organization can answer, “How is Copilot materially improving our KPIs?” Essentially, a lack of quantified impact creates internal resistance from budget holders and skeptics.
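The ROI question budget holders keep asking can at least be framed explicitly. Below is a minimal sketch of the kind of calculation I teach; every input (hours saved, loaded hourly rate, active adoption rate) is an assumption the organization must measure in its own pilot, not a vendor figure, and the sample numbers are purely hypothetical:

```python
def copilot_roi(num_users: int,
                monthly_price: float,
                hours_saved_per_user_month: float,
                loaded_hourly_rate: float,
                active_adoption_rate: float) -> float:
    """Simple ROI: (measured annual benefit - annual cost) / annual cost.

    Only actively-adopted seats generate benefit, but every licensed
    seat generates cost. All inputs should come from pilot data.
    """
    annual_cost = num_users * monthly_price * 12
    active_users = num_users * active_adoption_rate
    annual_benefit = active_users * hours_saved_per_user_month * 12 * loaded_hourly_rate
    return (annual_benefit - annual_cost) / annual_cost


# Hypothetical pilot: 500 seats at $30/month, 60% of seats actively used,
# 2 hours saved per active user per month at a $75 loaded hourly rate.
roi = copilot_roi(500, 30, 2, 75, 0.6)
print(f"{roi:+.0%}")  # → +200%
```

The point of writing it down is not precision; it’s that each input becomes a measurable claim a pilot can confirm or refute, which is exactly what “pilot purgatory” organizations are missing.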
Integration & Data Preparedness
Successful use of Copilot (and this is true for most AI platforms) depends on integrating it with the company’s data and workflows, which is non-trivial. For Microsoft 365 Copilot to be truly useful, a company’s internal knowledge (documents, intranet content, SharePoint, etc.) should be accessible and well-organized via the Microsoft Graph. Organizations with siloed or messy data find that Copilot gives generic, less helpful answers, hurting confidence in the tool.
There is an upfront burden to get data “Copilot-ready,” ensuring content is stored in the Microsoft ecosystem, labeled appropriately, and permissions are configured so Copilot can retrieve relevant information. With Big Bang AI, applications are broad, so the business must get ALL ITS DATA in order before seeing the expected returns. It must update workflows and processes to integrate AI across the business simultaneously. Smaller, more targeted implementations need much less overhead to deliver.
Integration challenges slow deployments as IT teams work through compliance checks and technical setup. If Copilot is not seamlessly embedded in the user’s workflow and armed with the proper data context, it flops, so this prep work is essential but time-consuming. The need for strong data governance around Copilot has proven to be a hurdle for less digitally mature organizations. Smaller implementations require fewer changes to workflows and are easier for users to adopt.
I see this in more than just Copilot, and I’m not singling Microsoft out as being unique with any of these challenges. However, few AI product vendors do a good job of disclosing all the upfront work required to get their solutions to function as well as they do in the demos and case studies. Solutions work reliably under perfect conditions, but integrating with other platforms and vendors creates a less-than-perfect setup.
Starting small and building incrementally spreads the work out. The business sees value quickly with the first implementation. Upfront work scales in line with each use case instead of being a massive block that must be completed at once.
Change Management & Long Learning Curves
Rolling out Copilot requires cultural and workflow changes that can be challenging. We’ve had multiple clients bring us in to advise on just the workflow transformation elements of AI adoption. It can feel trivial compared with the other challenges, but I would argue it’s the most complex.
Employees must be trained to use Copilot effectively: writing good prompts, verifying AI outputs, understanding limitations, etc. Workers resist and distrust the AI if they fear it could replace parts of their jobs. Others misuse it at first. Over-relying on AI tools without verification leads to mistakes that erode trust in the tool. Enterprises must invest in user education and set clear usage policies.
Organizations that neglect this have seen lower success. Those that do it well achieve much higher utilization and benefit. Managing this change (convincing employees to adopt a Copilot mindset, adjusting workflows to include AI, and continuously improving the human-AI collaboration) is non-trivial. Companies cite this as a hurdle, especially in less tech-savvy departments. Without buy-in and skill-building, AI tools are underused.
Smaller, more targeted AI tools work the same way here as they do with integration and data preparedness. Targeting specific workflows means targeted, focused training. Workflow changes can be planned and communicated in advance vs. reacted to after the fact. I teach intentional technology use because it focuses on applying technology to improve a workflow vs. improving an organization or domain. That also helps clarify the ROI estimation.
Competition, Choice, & Change Overload
The rapid proliferation of AI tools has also introduced a dilemma: Which Copilot or AI assistant should the business bet on? When you start big, the pressure’s on to pick the right tool upfront across the enterprise. Starting small, like GitHub Copilot, removes a lot of that pressure. The narrow use case doesn’t turn the decision into a make-or-break.
Some organizations have taken a pause because there are multiple competing solutions. The abundance of AI options has led to “decision paralysis” for leaders, who remain non-committal while waiting to see which solution proves best across a broad set of use cases. No one wants to invest heavily in one platform and then find out another was superior or more cost-effective.
When companies fail to provide an official solution, people choose their own, and shadow usage complicates the picture. The competitive landscape of Big Bang AI slows the decision to adopt any tool until the market shakes out. The result is tool sprawl, which inhibits rolling out a tool once the decision is finally made. Starting small accelerates adoption and prevents decision paralysis and shadow AI sprawl.
The Common Denominators
I use an incremental, small solution go-to-market with clients and teach it in my courses. Starting with a narrow domain like sales or finance and targeting a specific workflow has several advantages.
The product doesn’t need to be everything to everyone, so buying decision times are shorter. The initial user group is smaller, requiring fewer licenses, lowering startup costs, and accelerating the buying decision time.
Narrow use cases target one or two workflows so the integration overhead and learning curves are lower. Customer feedback is more specific because the product’s utility is straightforward.
Smaller solutions take less time to build and can be brought to market faster. That accelerates market feedback and the iterative improvement cycles. Learning lessons on a small scale means fewer expensive mistakes as the product and customer base scale up.
Once it’s adopted, new modules targeting new workflows can be easily added. Customers adopt new modules on their timelines instead of feeling forced to do everything at once.
Narrow solutions compete against fewer alternatives and are perceived as better since “this is their niche,” even as that niche expands.
There’s a new challenge for an agentic platform built incrementally: alignment. Features must align with a workflow to form a product. Products must align with a platform so that incremental solutions can be built at lower cost while still adding up to the grander vision, the Big Bang version, delivered one step at a time.