Why Your CEO Shouldn’t Be Taking AI Advice From McKinsey And Twitter
This is easily the worst 1000+ words on monetizing AI written by a major consulting house. It reads as if the writers prompted ChatGPT with three AI influencers' tweets, and this post was the result. It's all fun and games until generative AI becomes a distraction for C-level leaders, and we've hit that threshold.
I was talking with an SAP sales rep on the flight home from Sapphire, and he gave the best analogy for how business leaders should think about AI. He said, “The business wants the house. They shouldn’t care about the plumbing.” It’s an excellent way to frame the conversation around data, analytics, and AI.
It’s great for C-level leaders to be interested enough to sit in on the kitchen design or explain their vision for each room. We don’t want them to be dragged into the wiring and piping decisions. If C-level leaders get too low level, they lose sight of value.
Articles like McKinsey's pander to the AI hype cycle. They provide fluffy content in place of serious discussions about use cases and monetization. This post is a blueprint for breaking articles like that one down for business leaders. There are two reasons to spend the time on it.
First, C-level leaders need context, and the data team is one of the few that can provide it.
Second, C-level leaders should get used to coming to the data team for context on all things data, analytics, and AI.
In this post, I’ll explain how to break down ridiculous articles like this one so business leaders aren’t easily taken in by the next one. I will also cover redirecting leaders to important points. Finally, I will discuss framing the data team as a partner and information broker.
Breaking The Post Down
“Its out-of-the-box accessibility makes generative AI different from all AI that came before it. Users don’t need a degree in machine learning to interact with or derive value from it.”
Users don't need a degree to interact with their cell phone's camera. Alexa also works out of the box. Pretending that generative AI tools are the first time this has ever happened misses the point. The implementation is straightforward, and the user experience is unobtrusive. Every data and model-supported product with wide adoption has followed that implementation paradigm.
“generative AI can sometimes provide less accurate results”
Downplaying the risks is another bad take. Generative AI often provides completely inaccurate results. Telling CEOs that there are a few small problems here and there makes the technology sound more capable than it is.
“Imagine a customer sales call. A specially trained AI model could suggest upselling opportunities to a salesperson...A generative AI tool might suggest upselling opportunities to the salesperson in real time based on the actual content of the conversation, drawing from internal customer data, external market trends, and social media influencer data.”
No, a generative model won't perform well in this scenario. Logistically, it would require real-time transcription and dozens of calls to the model during the conversation. External market trends and social media influencer data can't be personalized to an individual customer. The less relevant the prompt data, the worse a generative model's recommendations become.
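To make the logistics concrete, here is a back-of-envelope sketch of the model traffic a real-time upsell feature would generate. Every number here is an illustrative assumption, not a measured figure.

```python
# Back-of-envelope estimate of the model traffic a "real-time upsell
# suggestions" feature would generate. All numbers are assumptions.

CALL_MINUTES = 30      # assumed length of a sales call
TURNS_PER_MINUTE = 4   # assumed speaker turns per minute
LATENCY_S = 2.5        # assumed transcription + model round trip, in seconds

def model_calls_per_conversation(minutes: int, turns_per_minute: int) -> int:
    """One prompt per speaker turn: transcript so far plus CRM context."""
    return minutes * turns_per_minute

calls = model_calls_per_conversation(CALL_MINUTES, TURNS_PER_MINUTE)
cumulative_wait_minutes = LATENCY_S * calls / 60

print(calls)                    # 120 model calls for a single conversation
print(cumulative_wait_minutes)  # 5.0 minutes of cumulative round-trip latency
```

Even under these generous assumptions, a single conversation generates over a hundred model calls, each of which must return fast enough to matter mid-sentence. That is the plumbing the article waves away.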
“A fraud-detection analyst can input transaction descriptions and customer documents into a generative AI tool and ask it to identify fraudulent transactions.”
No, they can’t. This use case isn’t feasible, even with retraining. Fraud is a complex use case that requires high reliability. Ask anyone who has built a fraud detection model about the potential implications of hallucination.
“A customer-care manager can use generative AI to categorize audio files of customer calls based on caller satisfaction levels.”
Generative AI models can provide a high-level sentiment estimation, but the reliability problem rears its ugly head again. The model would be reliable enough to be a novelty but not enough to be actionable.
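The gap between "novelty" and "actionable" is easy to quantify. A minimal sketch, assuming a hypothetical 85% per-call labeling accuracy (an illustrative number, not a benchmark):

```python
# Why "pretty good" sentiment labels aren't actionable at scale.
# The accuracy figure is an assumption for illustration, not a benchmark.

calls_per_month = 10_000
label_accuracy = 0.85  # assumed per-call accuracy of the generative labeler

mislabeled = round(calls_per_month * (1 - label_accuracy))
print(mislabeled)  # 1500 calls routed to the wrong satisfaction bucket
```

A customer-care manager who staffs follow-ups based on these buckets is acting on 1,500 wrong labels a month, and has no way to know which 1,500.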
“A software developer can prompt generative AI to create entire lines of code or suggest ways to complete partial lines of existing code.”
Just weeks after Samsung provided a cautionary tale of what happens when businesses send proprietary code to third-party tools, McKinsey goes all in on making the same mistake. This only works if there are guarantees in place stating that the business's data won't be used for model training. Today, few publicly available tools have provided complete transparency into how they use customer data.
“Given the versatility of a foundation model, companies can use the same one to implement multiple business use cases, something rarely achieved using earlier deep learning models.”
The section explaining how generative AI differs from other kinds of AI is riddled with inaccuracies. Most overstate accuracy, as this statement does. Generative models still need retraining to be reliable enough for enterprise use cases.
Throughout McKinsey’s article, they confuse consumer and enterprise use cases. Consumers are far more forgiving when it comes to hallucination than businesses can afford to be. At multiple points in the article, McKinsey makes a promise, then walks it back in a later paragraph. They speak in hyperbole, then advise caution.
“The bank decided to build a solution that accesses a foundation model through an API. The solution scans documents and can quickly provide synthesized answers to questions posed by RMs (Relationship Managers).”
Again, we encounter the hallucination challenge. Even when the solution feeds scanned documents and publicly available data sources into the model, there's no guarantee that hallucinations won't occur. There are ways to mitigate hallucination in this use case, but the complexity is far higher than the article discloses.
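To give a flavor of that hidden complexity, here is a minimal sketch of just one guardrail a document-Q&A system might layer on: rejecting any model answer whose content words aren't grounded in the retrieved passages. This toy word-overlap check is my own illustration; production systems combine many such checks, and real grounding verification is much harder than this.

```python
# Minimal sketch of one hallucination guardrail: refuse any model answer
# whose content words aren't grounded in the retrieved documents.
# Illustrative only; real systems layer many stronger checks than word overlap.

import re

STOP_WORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "and", "for"}

def content_words(text: str) -> set:
    """Lowercased alphabetic tokens with common stop words removed."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP_WORDS}

def is_grounded(answer: str, retrieved_passages: list, threshold: float = 0.8) -> bool:
    """Accept the answer only if most of its content words appear in the sources."""
    source_vocab = content_words(" ".join(retrieved_passages))
    answer_vocab = content_words(answer)
    if not answer_vocab:
        return False
    overlap = len(answer_vocab & source_vocab) / len(answer_vocab)
    return overlap >= threshold

passages = ["The fund charges a 0.25 percent annual management fee."]
print(is_grounded("The annual management fee is 0.25 percent.", passages))   # True
print(is_grounded("The fund guarantees a 12 percent return.", passages))    # False
```

Every guardrail like this adds latency, maintenance, and its own failure modes, which is exactly the complexity the article leaves out.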
“Organizations are reimagining the target state enabled by generative AI working in sync with other traditional AI applications, along with new ways of working that may not have been possible before.”
“Companies that have not yet found ways to effectively harmonize and provide ready access to their data…”
The article is riddled with these types of buzzy sentences that are 100% meaningless. I don’t have a data harmonization tool in my toolkit. No data engineer has ever proposed a data harmonization initiative.
SAP used "data harmonization" at their conference last week, and it was one of the few points that grated on my ears. It's a sign of how much staying power these buzzwords have, and that's the danger. CEOs shouldn't worry about plumbing or data harmonization. Just tell us where you want the water or data to flow, and the data team will take it from there.
The AI conversation has enough hype, and these zero-substance statements perpetuate the cycle. Articles like this one feel like deep dives because of their length and topic span. However, none of the content is actionable. That quickly becomes our problem.
“New models and applications are being developed and released rapidly.”
Business leaders don’t need to worry about the pace of model improvement and iteration. That should not be the driver for adoption or even entering the conversation. Opportunities and competitive threats should be the drivers.