I get called in to consult on one problem more than any other: AI monetization. Most discovery calls with C-level leaders surface a similar challenge: they need to show quantifiable returns on AI investments or prove they're making progress on their AI strategy. And they're on the clock. CEOs know they have 12 months or less to achieve their goals.
If you’re looking for an AI strategy advisor, I have an extensive certification network filled with exceptionally talented people who can help, and I am happy to make introductions.
Investors and shareholders expect businesses to be well beyond PoC purgatory and the experimentation phase. They are challenging CEOs to quantify the size of the business’s AI opportunity and lay out a timeline for realizing returns. They expect progress to be reflected on the balance sheet.
In prior years, any business with a perceived AI benefit got a boost, but that has changed in the last six months. CEOs need to accelerate AI's time to value, creating a massive opportunity for people with a combination of technical and strategic capabilities. In this article, I'll cover how I fill that gap and accelerate time to value for clients.
Buy Vs. Build Accelerates In 2 Ways
For internal use cases (operational efficiency and productivity), buying is almost always the right decision. It accelerates time to value by avoiding lengthy development cycles. Resources should instead focus on increasing adoption rates, shortening migration timelines, and customizing the solution with internal data.
Training and upskilling are critical for adoption, so look for vendors who provide them as part of the package. Vendors should also offer architects, project managers, and technical resources. This will significantly reduce migration timelines and headaches. Partner with the vendor to minimize the effort required for customization initiatives.
I advise clients to look for 3 main capabilities:
Default Knowledge Graph Or Ontology That Can Be Augmented With The Business’s Proprietary Data
Configurable Reporting, Analytics, And Models That Work Out-Of-The-Box And Can Be Customized
Deterministic Guardrails & High Reliability For Critical Use Cases
A semi-technical development environment or agent builder utility is another important capability, but not every business needs this in the first year. It’s enough for the vendor to be working on it, which brings me to my next point.
Evaluate the vendor’s AI product and platform roadmaps, looking 1-2 years ahead to ensure they align with the business’s needs. Capability maturity is a feature of the business’s AI strategy: internal needs will advance over time, and the worst outcome is getting locked into a solution that isn’t maturing in the same direction. I advise clients to look for solutions that run about a year ahead of the business.
The biggest AI opportunities are external, customer-facing products. Buying accelerates time to value by redirecting internal data and AI technical resources to those initiatives. Most businesses are slow to deliver products because their teams are tied up with internal productivity initiatives. Relying on vendors to do the heavy lifting for internal use cases accelerates the delivery of customer-facing products.
That has compounding effects. It takes data and AI teams time to learn how to deliver customer-facing products. They must learn to optimize solutions to improve margins and increase reliability enough to meet customer expectations. The business needs practice finding high-value use cases and monetizing solutions. The more reps teams get in during the first year, the faster value materializes and growth accelerates.
Small Models & Vertical Depth
It’s time to join the open-source fraternity of Phi-Gemma-Llama. Enterprises can accelerate AI monetization with Small Language Models (SLMs). SLMs sidestep the cost, reliability, data-requirement, and strategic-alignment challenges that often hinder monetization with larger, more complex models.
Smaller, domain-specific models are less expensive to build and maintain than massive models. They require less training data and computing resources. SLMs serve inference faster and at a lower cost per request. Lower costs mean more use cases have a positive ROI.
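The cost argument comes down to simple unit economics: if the value created per request exceeds the cost to serve it, the use case has positive ROI. The sketch below makes that arithmetic concrete; all prices and the value-per-request figure are hypothetical assumptions for illustration, not benchmarks from any real model or provider.

```python
# Hypothetical unit-economics sketch: cost per request for a large hosted
# model vs. a small self-hosted model. All numbers are illustrative
# assumptions, not real benchmarks or published prices.

def cost_per_request(input_tokens: int, output_tokens: int,
                     price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Token-based cost of serving a single inference request."""
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# Same request shape for both models: 2,000 input tokens, 500 output tokens.
llm_cost = cost_per_request(2000, 500, price_in_per_1k=0.0100, price_out_per_1k=0.0300)
slm_cost = cost_per_request(2000, 500, price_in_per_1k=0.0005, price_out_per_1k=0.0015)

value_per_request = 0.02  # assumed business value created per request

print(f"LLM cost/request: ${llm_cost:.5f}")   # $0.03500 -> exceeds value created
print(f"SLM cost/request: ${slm_cost:.5f}")   # $0.00175 -> well under value created
print(f"Positive ROI with LLM: {value_per_request > llm_cost}")
print(f"Positive ROI with SLM: {value_per_request > slm_cost}")
```

Under these assumptions, the same use case that loses money on a large model clears a healthy margin on a small one, which is the sense in which lower per-request costs put more use cases above the ROI line.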
Smaller, domain-specific models are more reliable and explainable. Reliability and agency are crucial for AI agents, and multiple small models are winning out over the one-massive-model approach. SLMs provide reliable vertical depth around a narrow use case, which is critical for adoption and for scaling AI’s agency (its ability to take action). Until users trust a model enough to let it act, the value an agent delivers is limited.
LLMs have persistent reliability issues. I advise clients to bypass them by leveraging small, explainable, high-reliability models built with simpler approaches and smaller datasets. SLMs’ unit economics work for a wide range of use cases where LLM costs scale faster than returns.
SLMs are easier to deploy and cost less to maintain, which accelerates AI product delivery. Businesses should meet the demand for immediate results with quarterly delivery cycles, something I teach in my courses and implement with clients. Initiatives lose momentum if business leaders don’t see tangible returns within a year. As I said earlier, CEOs are on the clock, and their early enthusiasm for AI’s potential fades quickly once they realize returns are over a year away.
SLMs play a large part in Multi-Agent Systems. In MAS, an open-source LLM can take a user request and return a set of actions that will satisfy the user’s intent. It then passes these actions to other SLM-based agents, which are built to execute narrowly defined tasks. Typically, a knowledge graph with supported intents, steps, and agents is used to create guardrails around the entire system.
Agentic AI systems support complex, multi-step workflows that require several calls to the underlying models. The faster and cheaper inference of models like Phi-Gemma-Llama is a game-changer (used unironically) for the cost and feasibility of agentic platforms. The LLM is only called twice in the workflow, once to detect and parse the intent, and once to deliver the outcome or result to the user.
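The workflow described above can be sketched in a few lines. This is a minimal toy illustration of the pattern, not a production implementation: the two LLM calls are stubbed out as plain functions, the "knowledge graph" is reduced to a dictionary of supported intents, and every name (`parse_intent`, `AGENTS`, the sample intents) is hypothetical. A real system would back `parse_intent` and `deliver_result` with an open-source LLM and each agent with a small, narrowly trained model.

```python
# Toy sketch of the multi-agent pattern: one LLM parses intent, narrow
# SLM-backed agents execute, and a map of supported intents acts as a
# deterministic guardrail. All names and logic here are hypothetical stubs.

# Stand-in for the knowledge graph: supported intents -> the narrow agent
# allowed to act on them. Anything outside this map is rejected.
AGENTS = {
    "check_order_status": lambda req: f"Order {req['order_id']} is in transit.",
    "reset_password":     lambda req: f"Reset link sent to {req['email']}.",
}

def parse_intent(user_message: str) -> dict:
    """LLM call #1 (stubbed): map free text to a supported intent and slots."""
    if "order" in user_message.lower():
        return {"intent": "check_order_status", "order_id": "A123"}
    return {"intent": "unknown"}

def deliver_result(result: str) -> str:
    """LLM call #2 (stubbed): phrase the agents' output for the user."""
    return f"Here's what I found: {result}"

def handle(user_message: str) -> str:
    request = parse_intent(user_message)
    agent = AGENTS.get(request["intent"])
    if agent is None:  # deterministic guardrail: unsupported intents never execute
        return "Sorry, I can't help with that request."
    return deliver_result(agent(request))

print(handle("Where is my order?"))
print(handle("Tell me a joke."))
```

Note that the expensive model appears in exactly two places, intent parsing and final delivery, while the per-task work runs on cheap, narrow agents, which is what makes the multi-step workflow economically feasible.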
Curated Data & Information Architecture
Leveraging the business’s first-party data as a primary competitive advantage is a dominant strategy that I teach and implement with clients. Unique, curated datasets provide a sustainable moat because models commoditize quickly. Contextual, higher-quality data reduces the amount of training data and model complexity required per use case, making model training less expensive and more efficient.
Curated datasets become AI products faster. A focus on data curation and infrastructure upfront prevents backtracking later in the development process. Businesses that attempt to go straight to AI typically walk that strategy back after a year or two. I advise clients to start with a data strategy, focusing on contextual data, which enables the delivery of value incrementally, starting with simple data products and descriptive models trained on high-quality data. This approach is also one of the foundations of my courses.
That’s how the early AI maturity journey gets monetized, and another way to avoid losing momentum. Early initiatives deliver returns quickly and build the foundation for more advanced AI products. Data and AI teams demonstrate value and maintain buy-in for continued investment on the journey to much larger returns.
There’s a two-way dependency between SLMs and information architecture. They amplify each other. Aligning data models with operations and products via information architecture connects improvements directly to business and customer outcomes. This structure supports iterative improvement and optimization cycles.
Connecting The Dots
It’s not enough to build a data and AI strategy that eventually delivers value. CEOs need an optimized journey that manages costs and pulls ROI forward. Information architecture and buying over building for internal use cases deliver immediate value, leaving more resources available for AI initiatives and less risk of losing momentum along the way.
A holistic strategy is enterprise-wide. The technology organization can only do so much without business-level transformations. Early returns and successes support coalition-building: other parts of the business can quantify the impact and value of AI initiatives, lending political capital to the data and AI teams. That’s critical for getting buy-in for enterprise-wide change.
When you’re ready to take on a high-demand role guiding your business’s data and AI strategy, there are still 5 seats left in my next Data & AI Technical Strategist Certification cohort. Act fast: there are only two weeks left, and my last cohort sold out. Learn More and Enroll Here.