Throughout my career, I've started over 30 businesses. Some succeeded wildly, most failed spectacularly, and quite a few landed somewhere in between.
Looking back at this entrepreneurial rollercoaster, there's one clear pattern: The businesses that failed weren't necessarily bad ideas—they were unvalidated ideas that I fell in love with too quickly.
I became emotionally attached to my vision and refused to "kill my darlings" when early warning signs appeared. By the time I finally admitted there wasn't a real market need, I'd already invested months of work and thousands of pounds.
The businesses that succeeded? They all started with rigorous problem validation. I wasn't building products because I thought they were cool—I was solving problems that had been thoroughly verified to exist, be painful, and worth paying to solve.
This is the entrepreneurs' dilemma: We need passion to fuel our journey, but that same passion can blind us to reality. The solution is a systematic validation process that forces us to confront the truth about our ideas—preferably before we've invested significant resources.
Let's get started:
Initial validation using existing evidence
Deep validation through targeted research
Real-world validation with minimal investment
In Parts 1-3 of this series, you've learned how to discover, categorise, and prioritise potential business problems. By now, you should have identified 2-3 promising problem clusters or portfolios that seem worth solving with AI.
But here's a sobering reality: Your judgment about which problems are worth solving is likely wrong.
Not completely wrong, but wrong enough that blindly following your intuition is risky.
The reason is simple: We all suffer from confirmation bias. Once we think we've found a great problem to solve, we unconsciously look for evidence that confirms our belief while ignoring evidence that contradicts it.
We want this to work, so we'll see evidence accordingly.
So we’re going to get systematic.
There are three phases to effective problem validation:
Initial Validation: Quick research to confirm basic viability
Deep Validation: Thorough investigation with AI assistance
Real-World Validation: Simple tests with actual potential customers
Let's explore each phase with practical methods you can implement immediately.
The first phase is about quickly confirming that your problem has basic merit. You're looking for existing evidence that:
The problem actually exists
People are actively trying to solve it
There's a market willing to pay for solutions
If we can’t get this far then there’s no point going forward.
Here are three effective methods for initial validation:
Contrary to common belief, the existence of competitors is often a good sign: it means there's a market willing to pay for solutions to this problem. An absence of competition might indicate the problem isn't worth solving (or, very rarely, that you've found a blue-ocean opportunity, but don't count on it!).
Competition risk we can deal with. Market risk we cannot.
For each problem, research:
Direct competitors solving exactly this problem
Adjacent solutions that partially address the problem
DIY methods people currently use as workarounds
What you're looking for:
Evidence of established businesses in this space
Pricing models and approximate market size
Customer reviews highlighting limitations of current solutions
Ideally, use an AI research model for this step.
Search volumes can provide quantitative evidence of problem significance. If many people are searching for solutions, the problem is likely real.
It's important to note that this is one area where AI isn't particularly helpful on its own. At least not yet!
AI models don't have access to current search volume data from Google, so you'll need to manually use keyword research tools and then feed the data to AI for analysis, or simply do this research yourself (which doesn't take long and gives you valuable market insights).
Use tools like Google Keyword Planner (free), Ubersuggest, or AnswerThePublic to look for:
Monthly search volumes for problem-related terms
Trend direction (is interest growing or declining?)
Cost-per-click (higher CPCs often indicate commercial intent)
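Once you've exported data from one of these tools, a small script can triage it before you hand it to AI for deeper analysis. This is a minimal sketch: the field names, thresholds, and sample keywords are illustrative assumptions, not anything the tools themselves prescribe.

```python
# Quick triage of exported keyword data (e.g. from Google Keyword Planner).
# Thresholds and field names below are illustrative assumptions.

def triage_keywords(rows, min_volume=1000, min_cpc=1.0):
    """Flag keywords suggesting a real, commercially relevant problem."""
    promising = []
    for row in rows:
        growing = row["trend"] > 0            # positive = interest growing
        enough_volume = row["monthly_searches"] >= min_volume
        commercial = row["cpc"] >= min_cpc    # advertisers pay for this intent
        if enough_volume and (growing or commercial):
            promising.append(row["keyword"])
    return promising

# Hypothetical rows, shaped like a keyword-tool export
rows = [
    {"keyword": "automate invoice processing",
     "monthly_searches": 2400, "trend": 0.15, "cpc": 4.20},
    {"keyword": "fax machine repair",
     "monthly_searches": 300, "trend": -0.40, "cpc": 0.50},
]
print(triage_keywords(rows))  # → ['automate invoice processing']
```

The exact cut-offs matter less than applying the same ones to every problem you're comparing, so your own enthusiasm can't quietly move the goalposts.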
Post about the problem (not your solution) to your audience or relevant communities to gauge response.
This is where having your own audience becomes incredibly valuable. If you've built a social media following, email list, or community in your target industry, you have a ready-made validation group at your fingertips.
Some effective approaches:
"Does anyone else struggle with [problem]?"
"How do you currently handle [specific task]?"
"What's your biggest frustration with [process]?"
Pay attention to engagement levels, specific pain points mentioned, and enthusiasm about potential better solutions.
If your problem passes initial validation (evidence of competitors, search volume, and forum interest), move to the next phase. If not, reconsider whether this problem is really worth solving.
Once a problem passes initial validation, it's time for deeper investigation. This phase is about gathering detailed information and actively challenging your assumptions.
AI can help you analyse vast amounts of information about your problem space. However, we need safeguards against hallucination and confirmation bias!
If you ask an AI whether something is a good idea, it tends to be agreeable and say "yes, it's great!" That's not helpful!
Design prompts that actively challenge your assumptions rather than just seeking confirmation:
You are a skeptical business analyst evaluating the following business problem:
[Insert problem description]
First, argue AGAINST this being a significant problem worth solving with AI.
Then, argue FOR this being a significant problem worth solving.
Based on both perspectives, provide your balanced assessment:
- Is this likely a real, significant problem?
- What additional information would help validate or invalidate it?
- What specific aspects of the problem seem most promising?

This "red team/blue team" approach helps overcome confirmation bias and generates more reliable insights.
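If you're validating several problems, it's worth wrapping the template in a small helper so every problem gets exactly the same skeptical treatment. This sketch just builds the prompt string; the example problem description is hypothetical, and you'd paste the result into whichever AI tool you use.

```python
# Build the red-team/blue-team validation prompt consistently for each
# problem. The wording mirrors the template in the text.

SKEPTIC_TEMPLATE = """You are a skeptical business analyst evaluating the following business problem:

{problem}

First, argue AGAINST this being a significant problem worth solving with AI.
Then, argue FOR this being a significant problem worth solving.
Based on both perspectives, provide your balanced assessment:
- Is this likely a real, significant problem?
- What additional information would help validate or invalidate it?
- What specific aspects of the problem seem most promising?"""

def build_validation_prompt(problem_description: str) -> str:
    """Return the full skeptic prompt for one problem description."""
    return SKEPTIC_TEMPLATE.format(problem=problem_description.strip())

# Hypothetical problem description for illustration
prompt = build_validation_prompt(
    "Small accounting firms spend hours re-keying invoice data."
)
```

Keeping the template in one place means every problem in your portfolio is challenged with identical rigour, which is the whole point of the exercise.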
Nothing beats talking directly to experts in the field. These could be professionals who experience the problem firsthand, consultants who work in the industry, or vendors of adjacent solutions.
Aim to conduct 3-5 expert interviews for each problem cluster you're validating, focusing on:
How the problem manifests in their daily work
Current solutions and their limitations
The value of solving this problem more effectively
Decision-making processes for adopting new solutions
Anything you might be missing
Keep these interviews conversational and be genuinely curious. You'll often uncover aspects of the problem you hadn't considered, which can dramatically improve your eventual solution.
The final and most reliable phase is to test your problem assumptions in the real world without building a full solution.
Ultimately, we have to move beyond asking AI. Sorry!
Now we need to put our ideas in front of real potential customers.
Create a simple landing page that:
Describes the problem you've identified
Outlines your proposed solution concept
Includes a call-to-action that requires commitment
Generally the commitment will be joining a waitlist. I'm currently doing this with the AI Automation Accelerator: we're at 1,200 people on the waitlist, which is solid validation of market demand, especially because the validation target was 500.
Drive targeted traffic to this page through social media posts, small ad campaigns ($100-200 budget), or direct outreach to potential customers. Again, having an audience makes this much easier!
A well-constructed landing page test can validate both problem and solution fit before you build anything.
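The key is to set your success threshold before driving any traffic, then judge the result against it mechanically. A minimal sketch, echoing the numbers in the text (target 500, actual 1,200); the visitor count is an illustrative assumption.

```python
# Judge a landing-page test against a pre-committed signup target.
# Visitor count is a hypothetical figure for illustration.

def evaluate_waitlist(visitors: int, signups: int, target: int) -> dict:
    """Return conversion rate and whether the pre-set target was hit."""
    conversion = signups / visitors
    return {
        "conversion_rate": round(conversion, 3),
        "validated": signups >= target,
    }

result = evaluate_waitlist(visitors=8000, signups=1200, target=500)
# → {'conversion_rate': 0.15, 'validated': True}
```

Deciding the target in advance is what makes this a test rather than a rationalisation: if you set the bar after seeing the numbers, confirmation bias sets it for you.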
One of the most powerful validation techniques is to manually simulate your AI solution before building anything—the "Wizard of Oz" technique.
Many now-massive companies started this way. Grubhub, for example, began with just a website: orders would come in, the founders would phone the restaurants to place them, and they'd pocket the difference. Mad, huh? There was no sophisticated logistics system, just people pretending to be the technology until they validated the concept and could build the real thing.
This approach:
Validates willingness to use (and potentially pay for) your solution
Gives you deep insights into exactly what customers need
Helps you understand the nuances of the problem
Builds a base of reference customers before you build anything
Importantly, you don't need to tell people you're doing things manually behind the scenes. The goal is to test the value proposition, not the technology.
If your problem has passed the previous validation stages, consider setting up a paid pilot program:
Approach 5-10 potential customers who would benefit most
Offer a discounted pilot program (still paid, but lower than your eventual pricing)
Deliver the solution manually or with minimal automation during the pilot
This approach:
Confirms actual willingness to pay
Provides revenue while you're still validating
Gives you deep customer insights
Creates case studies for your full launch
The key is to set clear expectations about the pilot nature of the programme while still delivering genuine value to participants. Be completely open about the fact that they are beta testers and price accordingly. If anything, they'll get better, white-glove service, and at a lower price. Great for them!
In our final part tomorrow, we'll explore how to turn your validated problems into profitable AI businesses. We'll cover:
Building a portfolio of complementary problems
Matching problems to the right AI technology approach
Creating service packages and pricing strategies
Initial marketing and customer acquisition approaches
Between now and then, select your most promising problem cluster and run it through at least one validation method from each phase. Your goal is to have at least one thoroughly validated problem ready for solution development.
Kyle