Thank you for being part of the community, whether as a paid or free subscriber. I deeply appreciate your support, from likes and reshares to Founding Members and monthly subscribers who make this newsletter possible. Paid subscribers get:
15% Off Most Courses & Personal Services
Access To Weekly Office Hours
Reserved Spots In Instructor-Led Courses
The Ability To Comment & Request Posts On Topics Of Interest
Access To The Full Library With Over 600 Articles
The updated office hours links and course discount code are at the bottom of this article. Please take advantage of your subscriber benefits. I am excited about two instructor-led cohorts I’m starting in January:
Data & AI Strategist Certification
Data & AI Product Management Certification
My three best-selling self-paced courses will be on sale through the end of January. You get almost half off their regular price, so take advantage of the sale before it’s over.
Most AI Is Illegal & Unethical By Nature
Yes, but what do I really think about AI? Joe Reis talked about personal branding on LinkedIn, and my brand is often direct, to the point, and contrarian. It’s what CEOs, boards, and founders pay me for. That’s why you’ll see me on top 10 lists with people like Andrew Ng and Fei-Fei Li. We can’t fix problems we dance around or build in promising directions we refuse to acknowledge.
Most AI is illegal and unethical by nature, but that’s an engineering challenge, not an indictment of the technology. Nuclear energy produces radiation that’s extremely dangerous to most life on Earth. We can only get the benefits after solving the engineering challenges. Pretending that gentle glow is harmless has short-term benefits, but when a technology’s harms are felt, a backlash and overreaction are inevitable, and it’s the backlash that sidelines practical applications.
OpenAI released Sora this week, and less than a day later, questions about its training data surfaced. Sora generates videos based on user prompts and does a great job of recreating popular video games. OpenAI has put safeguards at the prompt level to prevent people from asking for a copy of Super Mario Bros. However, it’s still possible to generate a video that looks a lot like the game with creative prompting.
Gemini wants me back, and I must admit it has improved greatly this year. But I can’t go back because Google still hasn’t resolved the underlying exploitation of any data I send. I can’t trust it, so I can’t work with it.
A Theft Of Personal Data In Plain Sight
Exploiting personal and proprietary data to train large foundational models will cause a backlash that sidelines the technology. The denials are childish. Hiding behind lengthy terms and conditions or legal loopholes is lazy. Companies like Google are making tens of billions and stand to make trillions more over the next decade.
Spend the cash and take the time to source AI training data ethically. Rearchitect to reduce the model’s need for data. Had Google started two years ago, Gemini would be a clear leader for enterprise use cases today.
Companies like Salesforce, SAP, IBM, Amazon, and Adobe did. All of these companies started with questionable data-gathering practices. They listened to customers, updated data usage policies, and engineered solutions. My clients and I can do business with these companies in ways we can’t with Google and OpenAI. The GenAI bubble would have already burst, and a business revolt would be in full swing, if not for the companies that accepted those short-term costs.
It’s one thing to acknowledge a problem and intentionally choose not to solve it. It’s something very different to hide that decision behind a façade of ignorance or outright deception. “It’s not a real problem, so there’s nothing here to solve.” We can’t indulge people who deny data thefts that occur in plain sight. We can’t enable gaslighting with watered-down language. Weak responses allow weak people to get away with ridiculous behavior.
OpenAI is a serious company making increasingly serious products, but its founder isn’t a serious person. Serious AI builders left Google to bring LLMs to market because they got tired of leaders who were unwilling to choose short-term costs. Many of the same people left OpenAI because they couldn’t get its leadership to take on obvious challenges.
A Growing Sentiment In People Who Are Tired Of Waiting
Stuart Winter-Tear brought up a trend during this week’s office hours and on LinkedIn. A growing number of people are frustrated with leadership and business units that refuse to change, even when change is in their own best interests. Based on what Stuart and many others are seeing, their numbers are growing rapidly.
The frustration centers on a denial of reality by the people who run the business. Data and AI create opportunities and threats. Solutions aren’t simple and often involve work at every level of the business. If businesses don’t mature and integrate AI into operations and products, they’ll quickly fail.
I teach approaches to getting buy-in from C-level leaders, overcoming objections, building coalitions from the bottom up, and moving aside those who irrationally resist data, analytics, machine learning, and AI. One or two people in every cohort express frustration with leaders who refuse to support initiatives with obvious, significant value and throw up barriers to change against all common sense.
At every client, I find people who have worked to move the business forward and are burnt out by the constant resistance they encounter. They are tired of explaining the same realities and making the same case for months before work begins on each and every new initiative.
Some are leaving for businesses and roles where they will have a bigger impact. Others are launching businesses to do what their last employer refused to. There’s a growing realization that a small team of focused problem solvers can do more than a big business filled with leaders who have grown soft during good times. Even the current crop of AI-first startups isn’t immune from the trend.
To Go Fast, Go Alone. To Go Far, Go With A Team
In 2012, I was frustrated with a business that refused to change and solve real problems. Like many people now, I was tired of corporate ineptitude and decided to do it myself. I built quickly in the first two years but hit a plateau equally fast. I built a community, and it helped me continue to grow. I have trained over 5,000 others to do what I can, and today my growth scales beyond what I accomplish alone.
People don’t build the amazing things they are capable of because they don’t have all the pieces in place. You can find all the pieces within this community, so I’m offering you access. Over the next month, I will kick off programs that fill gaps and connect problem solvers. My community has:
VCs and people in private equity who are sick of the hype and waiting for vaporware startups to grow up.
People looking for AI products that actually work and meet their needs, both personal and business.
Data and AI builders who are done watching their best work get shelved.
People I have trained myself who can provide advisory or fractional data and AI leadership, or join a team full time.
Access to distribution. Sales and marketing are the biggest challenges for technical founders. This community brings AI products that work to people who write checks.
Before handing out other people’s time, I plan to give away some of my own. You shouldn’t have to pay to get your resume reviewed, and too few people review senior++ and director+ resumes. Send your resume, current challenges or roadblocks, and 1-year career objectives to resumes@endgameengineering.com between now and the end of December. I’ll review it and provide actionable feedback.
In return, I ask that you check back with me at the end of 2025 and tell me if my feedback helped you achieve any outcomes.