The 5 Questions to Ask Before Starting Any AI Project
Before you invest in AI, answer these five questions. A practical framework for business leaders who want results.
Here’s a pattern we see all the time.
A company decides it’s time to “do something with AI.” Maybe the board is asking questions. Maybe a competitor just announced an initiative. Maybe the CEO read an article on a flight and came back energized.
So they hire a consultant or buy a platform. Six months and six figures later, they have a tool nobody uses, a team that’s frustrated, and a lingering suspicion that AI was overhyped all along.
The technology wasn’t the problem. The problem was starting without asking the right questions first.
Before you spend a dollar on AI, before you sign a contract or sit through a demo, there are five questions you need to answer. They’re not technical questions — you don’t need an engineering background to work through them. But they’re the difference between an AI project that delivers real value and one that becomes an expensive lesson in what not to do.
Question 1: What specific problem are we trying to solve?
This sounds obvious. It isn’t.
“We want to use AI” is not a problem statement. Neither is “we need to be more innovative” or “we should automate more.” These are aspirations, not problems. And aspirations don’t give you anything to measure against.
A real problem statement is specific and concrete:
- “Our sales team spends 15 hours per week manually updating CRM records, and the data is still unreliable.”
- “It takes us three weeks to generate quarterly reports because we’re pulling from six different systems.”
- “Customer inquiries sit in a queue for 48 hours before anyone responds, and we’re losing deals because of it.”
Notice what these have in common. They identify a specific process. They quantify the pain — in hours, weeks, or dollars. And they point to a consequence that matters to the business.
If you can’t articulate the problem this clearly, you’re not ready to start an AI project. You might be ready for a discovery conversation, but you’re not ready to build anything yet.
The test: Can you explain the problem in one sentence to someone outside your company, and have them immediately understand why it matters?
Question 2: What does success look like — in numbers?
This is the question that separates serious initiatives from expensive experiments.
Before you begin, you need to define what winning looks like. Not in vague terms like “improved efficiency” or “better insights,” but in numbers you can actually track.
- “Reduce the time spent on report generation from three weeks to three days.”
- “Cut customer response time from 48 hours to under 4 hours.”
- “Decrease data entry errors by 80%.”
- “Free up 10 hours per week per salesperson for actual selling.”
These targets do two things. First, they give you a clear benchmark to measure against. You’ll know whether the project worked or not — there’s no ambiguity. Second, they force you to think about ROI before you’ve spent anything.
If a project will save 10 hours per week across a team of 20 people, that’s 200 hours per week — roughly 10,000 hours per year. What’s that time worth? What would your team do with it? Can you put a dollar figure on the value created?
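That back-of-the-envelope math is easy to sketch in a few lines. The hourly cost and the 50-week working year below are hypothetical placeholders, not figures from this article; substitute your own fully loaded rate:

```python
# Rough ROI sketch for the example above.
HOURS_SAVED_PER_PERSON_PER_WEEK = 10
TEAM_SIZE = 20
WORK_WEEKS_PER_YEAR = 50   # assumption: ~50 working weeks per year
HOURLY_COST = 75           # hypothetical fully loaded cost, $/hour

weekly_hours = HOURS_SAVED_PER_PERSON_PER_WEEK * TEAM_SIZE  # 200 hours/week
annual_hours = weekly_hours * WORK_WEEKS_PER_YEAR           # 10,000 hours/year
annual_value = annual_hours * HOURLY_COST

print(f"Hours saved per week: {weekly_hours}")
print(f"Hours saved per year: {annual_hours:,}")
print(f"Estimated annual value: ${annual_value:,}")
```

Even with a conservative rate plugged in, the output makes the conversation concrete: you are no longer debating whether AI is worth it in the abstract, but whether a specific dollar figure justifies a specific investment.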
If you can’t define success in measurable terms, you have no way to evaluate whether the investment was worth it. And if you can’t evaluate the investment, you’re just hoping it works out. Hope is not a strategy.
The test: If someone asked you in six months whether the project succeeded, would you have a clear, quantifiable answer?
Question 3: Why hasn’t this been solved already?
This question catches people off guard. But it’s essential.
If the problem you’ve identified is real and painful, there’s usually a reason it hasn’t been addressed yet. Understanding that reason will tell you a lot about whether AI is actually the right solution — and what obstacles you’re likely to face.
Sometimes the answer is straightforward: “The technology to solve this didn’t exist until recently.” Fair enough. AI capabilities have advanced dramatically, and problems that were unsolvable two years ago may be solvable now.
But often, the real barriers are organizational, not technical:
- The data isn’t there. You can’t train an AI on data you don’t have, or data that’s scattered across disconnected systems. If the problem is fundamentally a data problem, AI won’t magically fix it.
- The process is undefined. If nobody actually knows how decisions get made today — if it’s all tribal knowledge and ad hoc judgment calls — AI won’t be able to replicate or improve it. You need to understand the current process before you can enhance it.
- There’s no ownership. The problem falls between departments, and nobody has the authority or incentive to fix it. AI won’t solve a governance problem.
- Change is politically difficult. Someone benefits from the current inefficiency, or people are resistant to changing how they work. Technology doesn’t overcome organizational resistance — it often amplifies it.
Being honest about these barriers upfront will save you from launching a project that was never going to succeed, regardless of how good the technology was.
The test: If the problem is solvable with AI, why hasn’t it already been solved? What’s actually been in the way?
Question 4: Who will use this, and what will change for them?
AI tools don’t exist in a vacuum. They get used — or don’t get used — by real people with real jobs and real habits.
One of the most common failure modes in AI projects is building something that works perfectly in a demo but never gets adopted. The technology functions fine, but it doesn’t fit into how people actually work. It adds steps instead of removing them. It requires behavior change that nobody was prepared for.
Before you start, you need to understand:
Who are the end users? Not the executives sponsoring the project — the people who will actually interact with the tool every day. What are their current workflows? What do they care about? What frustrates them?
How does this fit into their existing work? Does the AI solution slot into tools they already use, or does it require them to adopt something new? The more friction you introduce, the less likely adoption becomes.
What are you asking them to do differently? Be specific. If a salesperson currently logs calls manually and your new system asks them to review AI-generated summaries instead, that’s a change. It might be a good change — but it’s still a change that needs to be managed.
What’s in it for them? This is the question that matters most. If the AI makes their job easier, faster, or more rewarding, they’ll use it. If it feels like surveillance, extra work, or a threat to their role, they’ll resist it — consciously or unconsciously.
The best AI implementations are the ones where end users can’t imagine going back to the old way. The worst are the ones where end users find workarounds to avoid the new system entirely.
The test: Have you talked to the people who will actually use this? Do they see it as a help or a burden?
Question 5: What happens if this works?
This might be the most overlooked question of all.
Assume the project succeeds. The AI does exactly what you hoped. Now what?
Can you scale it? If a pilot works for one team, can you roll it out to ten teams? What infrastructure, training, or support would that require?
What new problems emerge? Success often creates its own challenges. If you automate customer inquiries, you might suddenly expose a bottleneck in your escalation process. If you speed up report generation, you might reveal that nobody was actually reading those reports in the first place.
How will roles evolve? If AI handles tasks that someone used to do, what does that person do now? The answer isn’t necessarily fewer jobs — often, it’s different jobs. But you need to think this through before it happens, not after.
What’s the next step? If this project delivers value, what does that unlock? Are there adjacent problems you could tackle? Does success in one area create demand for AI in others?
Thinking about what happens after success isn’t just strategic planning — it’s also a forcing function for clarity. If you can’t envision what success leads to, you might not be solving a problem that matters as much as you thought.
The test: If this project exceeds expectations, are you prepared for what comes next?
How to use this framework
These five questions aren’t a checklist to rush through before a kickoff meeting. They’re meant to be worked through seriously, with the right people in the room, before any real commitment is made.
If you can answer all five questions clearly and confidently, you’re in a strong position. You’ve identified a real problem, defined what success looks like, understood the barriers, considered the people involved, and thought about what happens next. That’s more preparation than most AI projects get — and it dramatically increases your odds of success.
If you can’t answer one or more of these questions, that’s valuable information too. It tells you where you need to focus before moving forward. Maybe you need a deeper discovery process. Maybe you need to align stakeholders. Maybe you need to fix underlying data or process issues first. Projects that skip this step are the ones most likely to fail.
Either way, you’re better off knowing now than finding out six months and six figures later.
The real point
Here’s what this framework is really about: discipline.
The AI hype cycle creates pressure to move fast, to not get left behind, to do something. That pressure leads to rushed decisions, poorly scoped projects, and expensive disappointments.
The companies that succeed with AI are the ones that resist that pressure long enough to think clearly. They ask hard questions before they start. They define success before they spend. They understand that the technology is the easy part — the hard part is knowing where it belongs and how it fits.
That’s not caution for caution’s sake. It’s how you ensure that when you do move, you move in the right direction.