Before You Build Anything: The Strategic Questions Every AI Roadmap Must Answer First
April 11, 2026

There is a scene playing out in thousands of organisations right now. A leader has declared “we need an AI strategy.” A team has been assembled. A budget has been approved. And the very first question asked is “what should we build?” That question is asked in good faith. It is also the wrong question. Asking “what should we build?” before asking harder, more uncomfortable questions is why most AI projects fail. Not because the technology is bad. Because the strategy was never there.
An AI roadmap is not a list of features. It is not a timeline of model deployments. It is a set of answers to questions most organisations are too busy to ask. Before you write a single line of code, before you train a single model, before you sign a single vendor contract, answer these ten questions. Your roadmap depends on it.
1. What Problem Are We Actually Trying to Solve?
This sounds obvious. It is almost never answered honestly. Most AI projects start with a solution in search of a problem. Someone read about generative AI and got excited. Someone saw a competitor launch a chatbot and felt anxious. Someone attended a conference and came back with vendor brochures.
The real question is harder. What is the actual pain? Not the imagined pain. Not the pain the vendor says you should have. The actual, measurable, customer-complaint, employee-frustration, process-broken pain. Write it down in one sentence. If you cannot write that sentence without using the words “AI,” “machine learning,” or “intelligence,” you do not have a problem. You have a technology looking for a justification. Go back. Start again.
2. What Will We Stop Doing to Make Room?
Every AI project requires attention. Attention is finite. If you are spending attention on a new AI initiative, you are taking attention away from something else. What is that something else? Most roadmaps do not answer this question. They assume that AI is additive. That it slides into empty space. There is no empty space.
The honest answer is brutal. You will stop maintaining some legacy system. You will stop investing in that manual process. You will stop saying yes to other feature requests. Name the thing you are stopping before you name the thing you are starting. If you cannot name it, you are not ready to start.
3. Do We Have the Data or Are We Hoping?
This is where AI roadmaps go to die. The team builds a beautiful plan. The timeline is reasonable. The model architecture is sound. Then someone asks “where is the training data?” And the room goes silent. The data exists in theory. In practice, it is spread across seventeen spreadsheets, three databases, and a filing cabinet in the basement.
Before you build anything, audit your data. Not next quarter. Now. What do you have? What format is it in? Is it labelled? Is it clean? Is it representative? Is it legally permissible to use? If the answer to any of these questions is “we are not sure,” you do not have an AI roadmap. You have a data acquisition project wearing a costume.
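The audit does not need heavy tooling to start. As a minimal sketch, assuming the data arrives as a CSV export, a few lines of standard-library Python can surface the basics: row count, how much of each column is blank, and whether the labels you are counting on actually exist. The file path and column names in the usage are hypothetical.

```python
import csv

def audit_csv(path, label_column=None):
    """Minimal data audit: row count, per-column blank rate, label coverage."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        return {"rows": 0, "blank_rates": {}, "label_coverage": None}
    columns = rows[0].keys()
    # Fraction of rows where each column is empty or whitespace.
    blank_rates = {
        col: sum(1 for r in rows if not (r.get(col) or "").strip()) / len(rows)
        for col in columns
    }
    label_coverage = None
    if label_column in columns:
        label_coverage = 1 - blank_rates[label_column]
    return {"rows": len(rows), "blank_rates": blank_rates,
            "label_coverage": label_coverage}

# report = audit_csv("exports/support_tickets.csv", label_column="category")
```

This answers only the first two questions (what do you have, how clean is it); representativeness and legal permissibility still require humans in a room.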
4. Who Is Accountable When This Goes Wrong?
Every AI system fails. Not possibly. Certainly. Models drift. Edge cases appear. Users behave in ways the training data never saw. The question is not whether failure will happen. The question is who owns the response.
Your roadmap must name a human being. Not a team. Not a role. A specific, named person who wakes up when the system fails. That person has the authority to stop the system, override its decisions, and communicate with affected users. If you cannot name that person before you build, you are building a system with no accountability. That system should not be built.
5. What Is the Cost of Being Wrong?
AI systems make mistakes. The cost of those mistakes varies dramatically. A movie recommendation system that suggests a film you dislike costs nothing. A fraud detection system that freezes a single parent’s bank account before rent is due costs everything.
Your roadmap must answer: what is the worst-case cost of a false positive? What is the worst-case cost of a false negative? Not the average cost. The worst case. If those numbers are high, your roadmap needs human review loops, appeal processes, and fallback systems. If your roadmap does not include those things, you have not done the risk analysis. Stop. Go back.
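The worst-case framing can be reduced to a single gate. This is a sketch with invented numbers, not a risk model; the function name and tolerance are illustrative.

```python
def requires_safeguards(worst_fp_cost: float, worst_fn_cost: float,
                        auto_tolerance: float) -> bool:
    # Decide on the worst case, not the average: one catastrophic false
    # positive outweighs a thousand harmless ones.
    return max(worst_fp_cost, worst_fn_cost) > auto_tolerance

# Movie recommender: worst case either way is mild annoyance.
low_stakes = requires_safeguards(worst_fp_cost=0, worst_fn_cost=0,
                                 auto_tolerance=100)
# Fraud detection: a false positive can freeze someone's rent money.
high_stakes = requires_safeguards(worst_fp_cost=50_000, worst_fn_cost=5_000,
                                  auto_tolerance=100)
```

If the gate trips, the human review loops, appeal processes, and fallback systems go into the roadmap before anything else does.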
6. How Will We Know If This Is Working?
Most AI roadmaps define success as “the model launched.” That is not success. That is shipping. Success is changed behaviour, reduced cost, increased revenue, or improved satisfaction. Measurably. Quantifiably. Before you build, define the metric that will tell you whether the project mattered.
The metric must be defined pre-deployment, pre-training, pre-anything. Write it down. Attach a number. “Reduce customer support tickets about login issues by 30% within three months.” That is a metric. “Improve customer experience” is not. If you cannot define the metric before you build, you will never know if you succeeded. And you will keep funding the project forever because no one can prove it failed.
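A pre-agreed metric is just a name, a number, and a deadline. A minimal sketch, using the login-ticket example above with an assumed baseline of 1,000 tickets per month (so a 30% reduction means a target of 700); the class and field names are illustrative, and this version assumes lower is better.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SuccessMetric:
    name: str
    target: float      # the pre-agreed number, written down before building
    deadline: date

    def met(self, observed: float, checked_on: date) -> bool:
        # Success is hitting the target by the deadline,
        # not "the model launched".
        return observed <= self.target and checked_on <= self.deadline

# "Reduce login-issue support tickets by 30% within three months"
metric = SuccessMetric("login-issue tickets/month", target=700,
                       deadline=date(2026, 7, 11))
```

The point is not the code; it is that `target` and `deadline` are fixed before anyone trains anything, so no one can redefine success afterwards.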
7. What Happens to the Humans Currently Doing This Work?
This is the question everyone avoids. If your AI project succeeds, it will change someone’s job. Maybe it will eliminate tasks. Maybe it will eliminate roles. Pretending otherwise is not kindness. It is cowardice.
Your roadmap must name the humans affected. It must include a transition plan. Retraining budgets. Timelines. Communication. If your roadmap does not include these things, you are not doing strategy. You are doing surprise. And surprise destroys trust faster than any failed model. Answer the question before you build. Then answer it again.
8. What Is the Simplest Non-AI Alternative?
Here is a test. Before you build an AI solution, ask: “Could we solve this problem with a rule, a spreadsheet, or a human?” If the answer is yes, do that instead. AI is expensive. AI is fragile. AI requires maintenance. Rules, spreadsheets, and humans are cheap, understandable, and flexible.
Your roadmap must include the non-AI baseline. Build that first. See if it works. If it works, you are done. Congratulations. You saved hundreds of thousands of dollars. If it does not work, you have learned something valuable about the problem. That learning will make your AI project better. Skipping the baseline is not efficiency. It is arrogance.
9. How Will This System Be Maintained After Launch?
The launch is not the end. It is the beginning of maintenance. Models need retraining. Data pipelines need repair. User behaviour changes. Edge cases accumulate. The roadmap that stops at launch is a roadmap to technical debt.
Your roadmap must include a maintenance plan. Who monitors performance? How often do they retrain? What is the budget for ongoing work? What is the sunset plan for when the system is no longer useful? If you cannot answer these questions before you build, you are not building a system. You are building a future crisis.
10. What Would Make Us Stop?
This is the most important question. And almost no roadmap answers it. You will invest months. You will spend money. You will convince people. The project will take on momentum. And at some point, you will have evidence that it is not working. Will you stop? Or will you keep going because you have already spent too much to quit?
Your roadmap must include kill criteria. Explicit, measurable, pre-agreed conditions that trigger a shutdown. “If accuracy falls below 70% for two consecutive weeks, we stop.” “If user adoption is under 10% after three months, we stop.” “If the cost per prediction exceeds $5, we stop.” These criteria are not failures. They are freedom. They let you walk away from a project that is not working without losing face. Without them, you will throw good money after bad. And you will know you should have stopped. But you will not. Because no one drew the line.
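Kill criteria only work if they are checked mechanically, not renegotiated in the moment. A sketch encoding the three example criteria above, thresholds taken directly from those examples; the function name and input shapes are illustrative.

```python
def kill_reasons(weekly_accuracy, adoption_rate, months_live,
                 cost_per_prediction):
    """Return the list of pre-agreed kill criteria that have triggered."""
    reasons = []
    # Accuracy below 70% for two consecutive weeks.
    if len(weekly_accuracy) >= 2 and all(a < 0.70 for a in weekly_accuracy[-2:]):
        reasons.append("accuracy below 70% for two consecutive weeks")
    # User adoption under 10% after three months.
    if months_live >= 3 and adoption_rate < 0.10:
        reasons.append("adoption under 10% after three months")
    # Cost per prediction exceeds $5.
    if cost_per_prediction > 5.00:
        reasons.append("cost per prediction exceeds $5")
    return reasons
```

An empty list means keep going; a non-empty list means the line was drawn in advance, and the named accountable person from question 4 executes the shutdown.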
The Final Question Before You Build
AI is a powerful tool. It is not a strategy. It is not an answer. It is a way of getting to answers faster, provided you have asked the right questions first. The organisations that succeed with AI are not the ones with the most sophisticated models. They are the ones that asked the uncomfortable questions before they wrote a single line of code. They asked about problems, trade-offs, data, accountability, risk, measurement, humans, alternatives, maintenance, and kill criteria. They answered honestly. Then they built.
Be one of those organisations. Ask the questions first. Your roadmap will be smaller, uglier, and more honest than the ones full of buzzwords and timelines. It will also work. And working is better than impressive.
