Welcome to all the new subscribers this week who come from Oracle, IDC, AstraZeneca, Capgemini, Dell, BHP, and many more of the largest companies in the world. The community is closing in on 16,000 subscribers and followers. It’s great to have such a diverse group of readers. This article is free for everyone. Enjoy.
The AI Paradox: AI makes businesses more reliant on people than ever. A growing number of companies are being forced to answer a tough question: “What can your products do that my $30 subscription to ChatGPT or Gemini can’t?” As I explained on LinkedIn, human is premium. The businesses that thrive for the next 5 years will find ways to differentiate, with people driving some of the highest value competitive advantages.
After the first hour, most of the comments were AI-generated summaries of my post…a post about the value proposition of human insights. It was a moment of Zen where I realized that most people are completely unprepared for what’s next. Some are digging a hole that they’ll struggle to escape from.
I didn’t say this on LinkedIn, but people are also being forced to answer the same question businesses are. If a company can get all the same value out of a $30 ChatGPT subscription as we do from an employee’s work, what does that mean for that employee’s future job prospects?
The Degradation Of Competence That Comes With AI
My first data science initiative was for a large manufacturing client in 2012. We built a model that automated finding new suppliers for hard-to-source parts. It was a dramatic success and led to a second initiative delivering the same model for a new client. The second time, I integrated logging to see how the automation changed the workflow, and the results were stunning.
After 6 months, people in sourcing and procurement stopped doing supplier discovery manually. They let the model handle everything, which sounds good, but it wasn’t. When the model returned no new results for a search, they took it at face value. We had them audit those results and found that, in most cases, a specialist could find a new supplier that the model missed.
What’s worse, we heard a variation of the same thing from multiple specialists: “I haven’t done this in so long, I’m out of practice.” It took them longer to complete the manual search and discovery process. Many said, “It took a few searches before I got back into the swing of things.”
The audit found that the model surfaced a better supplier than specialists did just over 70% of the time. However, the rest of the time, people still delivered the best outcomes. We did a significant amount of rework to notify people of low-confidence results and keep them engaged in every search.
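In code, the rework amounted to a simple routing rule: treat an empty or low-confidence result as a trigger for specialist review instead of a final answer. Here is a minimal sketch of that pattern; the names and the threshold are illustrative assumptions, not the production values:

```python
# Minimal sketch of low-confidence routing for supplier discovery.
# The 0.8 threshold and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SupplierMatch:
    supplier_id: str
    confidence: float  # model's score that this supplier can source the part

CONFIDENCE_THRESHOLD = 0.8  # below this, a specialist reviews the search

def route_search(part_number: str, matches: list[SupplierMatch]) -> dict:
    """Decide whether a search is auto-accepted or routed to a specialist."""
    best = max(matches, key=lambda m: m.confidence, default=None)

    # An empty result set is low confidence by definition: "no new
    # suppliers found" was exactly the case specialists could often beat.
    if best is None or best.confidence < CONFIDENCE_THRESHOLD:
        reason = "no match" if best is None else f"low confidence ({best.confidence:.2f})"
        return {"part_number": part_number, "action": "specialist_review", "reason": reason}

    return {"part_number": part_number, "action": "auto_accept", "supplier_id": best.supplier_id}
```

The design choice that mattered most was never presenting ‘no results’ as an answer; it became the signal that a human needed to take over.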
The ROI of people working with models was higher than the ROI of the model or person alone. That’s when my Human-Machine Collaboration framework and design patterns began forming. I have had a front row seat to human-in-the-loop AI for over a decade. I have seen the value and improvements. I’m not denying the benefits or suggesting we should stop using AI.
But people must be more intentional about their adoption, usage, and limitations when it comes to AI. AI can make users feel more competent and capable when, in reality, it’s degrading both their competence and capability. AI is also good at convincing us that it’s better in every way, even when it’s clearly not.
Making Ourselves Look Replaceable
We should all be adopting AI into our workflows, but there is another behavior we don’t spend nearly enough time emphasizing. We must also be figuring out what we are capable of that AI isn’t and doubling down on those capabilities. The sourcing and procurement specialists had more time to dedicate to their toughest searches, so once they reengaged, they found better suppliers and created more value.
The people letting AI write all their comments are leaning into AI (which is good), but not developing capabilities that ensure their future career prospects. Every AI-generated comment was intended to bring more views to the human behind it. Comments on my posts can get thousands of impressions. Insightful comments can attract hundreds of new followers per day. People read the comment and think, ‘This is someone insightful who I want to follow.’
AI-generated slop brings just as many viewers, but leaves them with a very different impression. ‘This person doesn’t have original ideas and isn’t worth following.’ As more people turn to AI-slop commenting, the value of insightful commenters rises. The backlash against AI slop and the negative perception for those who do it also grow over time.
In the supplier discovery case, it would have played out that way if we hadn’t fixed the initial design. Specialist skills would have degraded, so the few people who retained their search skills would have become more valuable. Automation would make the specialists who leaned into the model the most look like they weren’t creating much value. They would have become replaceable.
The result? A few people reap massive benefits while most people get laid off. That’s the danger of making ourselves replaceable and the value of doubling down on what we can do that AI can’t.
Short-Term Gains & Long-Term Consequences
The immediate productivity gains and ability to scale our efforts are the first benefits we feel from AI. It’s only natural for those to be the ones we lean into the most as well. By following what seems to be the path of highest value, people will end up making themselves far too easy to replace with AI.
In my LinkedIn post, I raised the challenges that ChatGPT and Gemini’s deep research present to Gartner’s business model. Gartner’s current value proposition is taking in a firehose of information and aggregating it into summaries that explain key concepts about technology trends. Deep research has access to the firehose of publicly available information, and AI is excellent at distilling it down into summaries that explain key concepts.
What’s Gartner’s answer to, ‘What can you do for me that a $30 AI subscription can’t?’ First, access to information sources that aren’t public, which means developing human sources. Second, insights that aren’t obvious to AI. Insights that only an expert can surface using the vertical depth of domain expertise that most AI lacks.
For people working at the major analyst firms, developing human sources of information and deepening their domain expertise are critical for career longevity. Leaning into AI tools that allow them to churn out more reports will only deliver short-term gains at the expense of their careers.
The Tough Road Ahead
The reality is that most people won’t hear this message. As big as this community has grown, it’s still a tiny sliver of the internet content machine. Most people will only hear the conventional wisdom: ‘Lean into AI, and everything will be fine.’ Unfortunately, that will deliver short-term wins. Even when leaning into AI to be more efficient starts to fail them, most people will double down because they don’t understand why it stopped working.
At the same time, people who understand both phases (lean into AI, then differentiate on human capabilities AI can’t replicate) will thrive. By the time the ‘lean into AI’ group realizes what’s going on, their capabilities will be so deeply degraded that the path back will be a tough one. We will see this two-sided talent economy develop in every knowledge worker domain, from software engineering to marketing.
It's essential to use AI intentionally and not lean all the way into productivity gains alone. Reinvent the way you work with both goals in mind and spread the message to the people in your communities.
Throughout my career, I've fine-tuned my data detective skills, becoming the go-to person for the needle-in-a-haystack cases. It's not just a skill; it's an art form.
It requires a willingness to get into the bowels of data (this is my own quote from my Golden toilet post) and do the janitorial work - there's so much to discover at that low level!
It also requires having a bird's-eye view and understanding how the various components in the architecture work, and where bottlenecks happen and why - which can be detected only if there are tracking systems in place.
Put a message in a bottle with a GPS to find out how it travels - that's tracking data through a system.
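In software terms, the GPS is a correlation ID that every stage logs. A minimal sketch of the idea, with made-up stage names:

```python
# Sketch of tracking a record through a pipeline with a correlation ID.
# Stage names and timings are made up for illustration.
import logging, time, uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("pipeline")

def traced(stage):
    """Log the trace ID and duration for a pipeline stage."""
    def wrap(fn):
        def inner(record):
            start = time.perf_counter()
            result = fn(record)
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info(f"trace={record['trace_id']} stage={stage} took={elapsed_ms:.1f}ms")
            return result
        return inner
    return wrap

@traced("ingest")
def ingest(record):
    return {**record, "ingested": True}

@traced("transform")
def transform(record):
    time.sleep(0.05)  # the slow step a trace like this would expose
    return {**record, "transformed": True}

@traced("load")
def load(record):
    return {**record, "loaded": True}

bottle = {"trace_id": uuid.uuid4().hex[:8], "payload": "message in a bottle"}
load(transform(ingest(bottle)))
```

Grepping the logs for one trace ID reconstructs the bottle's whole journey, and the per-stage timings show where it got stuck.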
We had an interesting discussion of MCP and designing A2A systems vs APIs in one of Santiago Valdarrama's courses.
We were talking about the difference between RPA (automation from the UI/human interface side of apps) and agentic systems, which can skip the apps engineered for humans and communicate directly with each other.
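A rough sketch of that contrast, with a hypothetical inventory service standing in for a real app (the selectors, endpoint, and payload are all assumptions for illustration):

```python
# Contrast between RPA (driving the human UI) and an agent calling
# a machine interface directly. Everything named here is hypothetical.
import json
from urllib import request

def rpa_style_order(qty: int) -> list[str]:
    """RPA: script the human interface. In a real bot these would be
    screen-automation steps (find the field, type, click)."""
    return [
        "open app window 'Inventory'",
        "click field '#quantity'",
        f"type '{qty}'",
        "click button 'Submit Order'",
        "read confirmation text from the screen",
    ]

def agent_style_order(qty: int) -> request.Request:
    """Agentic: skip the UI built for humans and talk to the service
    directly over a machine-readable interface."""
    payload = json.dumps({"quantity": qty}).encode()
    return request.Request(
        "https://inventory.example.com/orders",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

print("\n".join(rpa_style_order(10)))  # brittle: breaks when the UI changes
print(agent_style_order(10).full_url)  # direct: no human interface involved
```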
The current world is trying to retrofit agents into it alongside humans, and the dominant narrative is that we still need humans in the loop for many things. But as people lose these core skills, or as systems are designed first or exclusively for agents rather than humans, how will we build and maintain enough competency to stay relevant?
Think of self-driving cars being forced to use visual cues designed for humans, like street signs and painted road stripes, vs digital sensors and QR codes.
When do we reach a tipping point where the Roman alphabet on metal poles is as obsolete as phone booths?