Most questions data scientists, analysts, strategists, and product managers are asked don’t capture the full complexity of the challenges behind them. LLMs fail for the same reasons we do. As a C-level advisor, I need to understand the question being asked and all the context the person asking it lacks. I watch advisors fail by answering the question the CEO asks instead of the questions the CEO needs answered. Questions are traps or doorways.
You must design a thought process for yourself to answer questions completely, and you must design products that leverage foundational models in the same way to achieve any outcome. The Royal Dilemma is an AI product design challenge. LLMs are positioned to be intelligent assistants and advisors, but those designing GenAI products don’t understand how people in these roles think.
Everyone in a data role has more context about what the technology is capable of and what data is available than the teams and customers we build products for. Early maturity businesses struggle to articulate their needs because they lack the context. Answering questions completely is an exercise in teaching people about the true nature of their problems and challenges. We must also provide context about the potential range of available solutions.
There are multiple ways to answer questions, and each reveals something about the person’s competence or the quality of an AI product’s design. I tell students that questions are essential to make my course content more relevant. But how do you know what questions to ask without the context to know which questions are important? How can you properly frame a question when you lack the context to understand your own intent?
I tell students that they don’t need a fully formed question. A comment or a general question is enough. I have taught the concepts long enough and applied them for even longer. I know the question they’re trying to ask and the information they are looking for. GenAI products aren’t designed that way.
A gatekeeper understands the complexity of the question and answers the question completely. An advisor teaches the context needed to ask the right question next time and then answers the question.
Gatekeeping vs. Advising
I’ll let you in on a massive secret. CEOs, advisors, and board members have been taught how to think and make effective decisions. Cassie Kozyrkov teaches decision hygiene. Annie Duke teaches high-stakes decision-making frameworks that manage uncertainty. Allie K Miller teaches how to change our thinking about the world to adopt an AI-first perspective. I teach data and AI-augmented decision-making frameworks to manage complex dynamic systems holistically.
You get paid a lot to teach people what to think about data and AI. You get paid even more to teach executives how to think about data and AI. They back the money truck up if you can take it one step further. Successful GenAI product design requires an understanding of all three levels.
The concept differs, but the first principles extend to all foundational model categories. AI product design requires an expert-level understanding of why experts do things the way they do and why they deliver outcomes better than alternative approaches.
In this article, I will give you a look at a different level of thinking and decision-making. A single question can tear someone down or rebuild someone with new insights and capabilities.
If I were only capable of teaching people how to think about the Royal Dilemma, I’d say the cliched, “I can’t tell you the answer. The point of the question is to learn how to think, and you must find your own answers.” I’d explain all the factors involved in answering the question, and you’d feel like you had this new insight into the world.
But you’d encounter a different question in the same category and be unable to transfer those insights to it. That realization would bring you back to my next class, seminar, or coaching session. A gatekeeper makes you pay for the same answer multiple times. To solve that problem, you must understand The Gatekeeper’s Lament or The Advisor’s Bonus. I may continue the series to teach those if this post is popular.
A gatekeeper is an intelligent assistant that maintains control over context and frameworks. They only deliver knowledge of how to think. An intelligent advisor does much more. To design the product, you must understand the expert, not just what the expert does or how they do it.
The Royal Dilemma & The Trap Of Methods
The Royal Dilemma is a classic strategic thinking assessment question and teaching tool. It is essential for data scientists, analysts, AI product managers, and AI strategists to learn to answer it.
You’re advising a CEO who is playing a poker machine. They have no opponents, and their winnings are based on their final hand. The game starts by dealing 5 cards. The CEO can hold some, none, or all the cards. If they choose to discard, those cards will be replaced with new ones from a single deck of cards. For example, if they discard two cards, they will be dealt two new ones. After this phase, the game ends, and the CEO is paid based on their hand.
The CEO has been dealt a flush but is one card away from the Royal Flush. They can hold their current hand and win $30,000 or discard one card and draw for a Royal Flush that pays $4,000,000. Only 1 of the 47 cards remaining in the deck (about a 2.1% chance) will complete the Royal Flush.
How do you convince your CEO to make the right decision?
The Royal Dilemma is a multi-level trap. The level at which a person falls reveals their proficiency at leveraging analytics and models to deliver business impact. Are they a tactical advisor, a gatekeeper, or a strategic advisor? Everyone starts out as a tactical advisor because that’s how we are taught to think.
Tactical advisors explain the math behind both choices. When I asked this question on LinkedIn, the LLM-generated comments delivered tactical advisor-level answers. Holding the flush has an obvious expected return. Drawing for the Royal could result in a straight, a flush, a pair, or the Royal. If we weight the probability of each outcome by its return, drawing has a much higher expected value than holding the flush.
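Here is a minimal sketch of that tactical calculation in Python. The $30,000 flush and $4,000,000 Royal payouts come from the scenario; the straight and pair payouts and the exact card counts are illustrative assumptions, not part of the original question.

```python
# A tactical advisor's expected-value math for the Royal Dilemma.
# Royal and flush payouts are from the scenario; the straight and
# pair payouts (and card counts) are illustrative assumptions.

HOLD_PAYOUT = 30_000      # guaranteed payout for keeping the flush
CARDS_REMAINING = 47      # 52 cards minus the 5 dealt

# (payout, number of qualifying cards) for each draw outcome
draw_outcomes = {
    "royal_flush": (4_000_000, 1),   # exactly one card completes the Royal
    "flush":       (30_000,    7),   # assumed: the other suited cards
    "straight":    (20_000,    6),   # assumed: off-suit aces and nines
    "pair":        (5_000,    12),   # assumed: off-suit 10s, Js, Qs, Ks
    "nothing":     (0,        21),   # everything else
}

ev_draw = sum(payout * count / CARDS_REMAINING
              for payout, count in draw_outcomes.values())

print(f"EV of holding: ${HOLD_PAYOUT:,.0f}")   # $30,000
print(f"EV of drawing: ${ev_draw:,.0f}")       # roughly $93,000
# The Royal alone contributes 4,000,000 / 47 ≈ $85,106 — nearly
# triple the guaranteed $30,000. That is the tactical answer.
```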
By the time we start talking about probable outcomes and expected returns, we have lost the audience. They don’t care about the methods or how we got to the answer. CEOs also don’t care about average value. They aren’t making this decision 1000 times. There is a single decision before them, and averages over time become meaningless in an applied decision scenario with few trials. That’s one reason classical economics, statistics, and analytics are insufficient tools.
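To see why the average misleads in a single-decision scenario, here is a hedged simulation sketch using the same assumed payout table as above. Most one-shot draws end worse than the guaranteed hold, even though the average draw is worth more:

```python
import random

# The CEO gets exactly one draw, not 1,000. Simulate single trials
# using the same assumed payout table as the sketch above.
payouts = ([4_000_000] * 1 + [30_000] * 7 + [20_000] * 6
           + [5_000] * 12 + [0] * 21)  # one entry per remaining card

trials = 100_000
worse = sum(random.choice(payouts) < 30_000 for _ in range(trials))
print(f"{worse / trials:.1%} of single draws pay less than the "
      f"guaranteed $30,000")  # ~83% — the average never shows up
                              # in the one decision before the CEO
```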
Bias & Perception Traps
Every word in the question sets a different trap, and I’ll explain the solution one trap at a time.
| How | do | you | convince | your CEO | to make | the right | decision |
How. The first word sets the stage for a tactical answer. The question must be reframed strategically to see past the first trap. If I tell you how to do something, I give you the tactics without insight. If I deliver only insights about how to answer the question, you’ll understand how to answer simple variations of it, but not how to generalize those insights to new questions in the same category.
Do. The next trap plays the bias toward action against us. Do is a command that implies we should take immediate action. For some, this trap causes them to act on the first data point they find. For others, it causes them to provide a single answer based on multiple data points and a thorough problem assessment.
You. The burden is transferred to the individual answering the question, which implies that this is an individual activity. Strategic decisions are rarely made based on a single person’s opinion, so approaching this problem individually leads to failure. There are always other key advisors with different perspectives and areas of expertise. Part of the advisor’s job is to know what domain expertise and perspective to deliver, and which ones are best managed by another advisor.
Convince. Convincing the CEO to take a course of action forces us to sell the solution. The salesperson’s problem is they must overcome objections. If the CEO sees their advisor as a salesperson, they don’t start from a position of trust. Most of their time is wasted discussing doubts, not delivering an outcome. Advisors deliver options and alternatives. Their explanations of each one support the CEO in making the right decision. The expert advisor avoids telling a CEO what to think or how.
Your CEO. This reinforces the trap of individual effort and opens a new one, an individual target. Just as strategic decisions are rarely made based on a single person’s opinion, they are also rarely made by an individual, even a CEO. Targeting a single decision-maker misses other key decision influencers. Every answer an advisor delivers targets multiple audiences and their collaborative decision-making process. To succeed, we must know what role each decision influencer plays in the final decision and how that impacts the decision outcome.
To Make. This reverses the bias toward action, and if we fall into this trap, we craft an answer that assumes the CEO also has a bias toward action. CEOs are strategists, and advising them to act without a strategic assessment will fail no matter how good the case and advice are. An answer that falls into this trap provides next steps before the CEO has decided, and the conversation can be derailed by discussions of tactics instead of focusing on the decision that must come first.
The Right. The assumption trap reinforces binary thinking. One of the two decisions is right, so the other must be wrong. If the CEO seems to be leaning toward the wrong decision, an advisor who falls into this trap will fight even harder to get them to change their minds. It’s easy for the advisor to become an adversary, and the more they insist on a different path, the more credibility their advice loses. Focusing on the right decision misses the larger point. Advisors deliver context, and we trust CEOs to know how to make decisions. If they make a different decision than we would have, our job is to understand why they came to that decision. Understanding their decision-making process informs us of what to do differently next time so we’re more aligned.
Decision. The last word is the most overlooked trap. It prescribes the wrong outcome, and falling into that trap creates a shortsighted answer. Strategic leaders can manage people above them, like CEOs, regardless of their title or position. Strategy is their source of authority, so they lead by aligning people on an outcome. Even a strategic leader can be thwarted if they are misdirected to align with the wrong outcome. The previous trap is a setup for this one as well. An advisor delivers an outcome, and the decision is a midpoint in a much larger chain. It’s critical to see the chain and know how each decision contributes to the desired outcome.
GenAI Product Design Patterns & Principles
Most LLMs are trained to answer questions with baked-in assumptions that only hold true for a small segment of questions. Hallucinations are the result of LLMs falling into the first three traps (How, Do, and You). They are built to deliver details without insight, so their answers feel hollow and incomplete. LLMs provide answers when they shouldn’t and work independently when they should be part of a collaborative system.
Multiple small models are more effective than a single large model. Each model is trained to be a domain expert advisor and isn’t hampered by the noise of other domains. Each small model is used intentionally in ways that align with the domain it was trained on, and the domain models work collaboratively to deliver an outcome. Other models function as the interface and orchestration layers.
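A minimal sketch of that pattern, assuming hypothetical domain advisors and a keyword router standing in for a real intent-classification model (in production, the router and the synthesis step would themselves be models):

```python
from dataclasses import dataclass

# Sketch of the small-domain-model pattern described above. The
# advisors, router, and synthesis are hypothetical placeholders,
# not a specific framework's API.

@dataclass(frozen=True)
class DomainAdvisor:
    domain: str

    def answer(self, question: str) -> str:
        # A real system would call a small model fine-tuned on one
        # domain; here we just label the response with its domain.
        return f"[{self.domain}] perspective on: {question}"

ADVISORS = {
    "finance":  DomainAdvisor("finance"),
    "strategy": DomainAdvisor("strategy"),
    "risk":     DomainAdvisor("risk"),
}

def route(question: str) -> list[DomainAdvisor]:
    """Orchestration layer: decide which domain experts the question
    needs. A keyword match stands in for an intent-classifier model."""
    keywords = {"payout": "finance", "decision": "strategy", "odds": "risk"}
    picked = {ADVISORS[d] for w, d in keywords.items() if w in question.lower()}
    return list(picked) or list(ADVISORS.values())

def advise(question: str) -> str:
    """Interface layer: gather each expert's answer, then combine.
    The combining step would be another model in practice."""
    return "\n".join(a.answer(question) for a in route(question))

print(advise("What are the odds, and what decision should the CEO make?"))
```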
Orchestration requires understanding what parts must be orchestrated to serve the intent and deliver the outcome. That’s the most complex aspect of AI product design. Software is deterministic, and design patterns are built for that product category. AI is stochastic, and we must develop new design patterns to support a new product category.
Seeing the need for new design patterns is difficult because simple models behave deterministically. As complexity rises, stochastic functionality emerges, and there’s no warning when that line is crossed. Deterministic products are purely functional. Stochastic products perform multiple functions with multiple levels of reliability.
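One way to picture the difference: deterministic software can be tested against exact outputs, while a stochastic product needs patterns like validate-and-retry around every call. This is a hedged sketch with a stand-in model, not a specific framework’s API:

```python
import random

def deterministic_tax(amount: float) -> float:
    # Classic software: same input, same output, testable exactly.
    return round(amount * 0.2, 2)

assert deterministic_tax(100.0) == 20.0  # this test always passes

def stochastic_model(prompt: str) -> str:
    # Stand-in for a model call: same input, varying output.
    return random.choice(["$30,000", "$93,404 expected", "draw the card!"])

def call_with_validation(prompt: str, is_valid, max_retries: int = 10) -> str:
    """A pattern stochastic products need and deterministic ones don't:
    validate each output, retry on failure, and fail loudly at the end."""
    for _ in range(max_retries):
        answer = stochastic_model(prompt)
        if is_valid(answer):
            return answer
    raise ValueError(f"no valid answer after {max_retries} attempts")

print(call_with_validation("Hold or draw?", lambda a: a.startswith("$")))
```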
People interact with stochastic products differently. Model complexity leads to broader functional generalization. People experiment and find new uses for stochastic products, many of which are outside the product’s intended functionality. Again, we need new design patterns to support a new usability paradigm.
AI product design requires an expert-level understanding of the product’s functional domains. Until AI product designers learn to solve The Royal Dilemma and dozens of other challenges like it, AI products won’t meet user expectations or live up to their potential.