There are so many amazing use cases for AI. Delegating your RFP to it isn't one of them.
It starts with good intentions. A professional organization decides it's time to refresh its digital presence, overhaul a core business process, or bring in a new technology platform. Someone on the committee suggests using AI to draft the RFP. It'll be faster. More thorough. Nothing left out.
What comes back looks sophisticated — detailed requirements, structured sections, careful consideration of security, scalability, integrations, and emerging technology trends. It reads like an enterprise-grade document. And while that seems great on the surface, it’s often the first symptom of a real problem. The RFP is wildly miscalibrated, and it's about to send the organization and every vendor responding to it on a long, painful, counterproductive journey.
This is happening constantly, across industries and in organizations of every size and type. And it produces the same broken outcome whether or not the organization did its homework.
// FAILURE MODE 1: SKIPPING THE THINKING ENTIRELY
The most common pattern: an organization reaches for AI to draft the RFP before doing any real discovery. It feels productive and efficient. The document writes itself.
In this scenario, what AI produces is a requirements document built from industry averages, best practices pulled from across the web, and every feature category that could conceivably be relevant. It draws on a vast pool of general information without the benefit of nuanced situational awareness. It doesn't know your priorities, your history with past initiatives, your internal workflow and staffing, or where your real scope creep risks lie. But it desperately wants to alleviate your FOMO.
The end result is an overreaching RFP: crushing for vendors to respond to, misaligned to your actual priorities, and priced dead on arrival. The AI wasn't being reckless — it was being thorough, which is exactly what it was optimized to do. Right-sizing requires judgment. Judgment requires context. AI has neither.
// FAILURE MODE 2: REAL DISCOVERY, AI AUTHORSHIP
Here's the more insidious version: the organization actually does the important discovery work. Stakeholders are interviewed. Priorities are debated and ranked. Budget realities are honestly acknowledged. A clear, grounded picture of the actual need emerges from a genuine internal process.
Then someone — trying to be helpful, wanting to address any potential gaps — pumps all of that hard-won clarity into ChatGPT and asks it to pull together the final document.
AI takes the thoughtful inputs and, defaulting to its instinct for comprehensiveness, surrounds them with every adjacent consideration it can generate. Round after round of "want to know what most companies miss?" The nuanced prioritization gets flattened. The deliberate trade-offs disappear. What goes in as a focused brief comes out as a bloated kitchen-sink specification — and the real discovery work gets buried under a layer of AI-generated thoroughness that actively undermines it.
This is the pattern that should give every organization pause. Even when you do the work right, bringing AI in at the wrong stage can undo all of it.
// THE DOWNSTREAM DAMAGE
Regardless of which path produced it, the fallout from an inflated RFP reaches farther than most organizations realize.
Start with the organization itself. The committee approved a document they may not have fully understood or agreed on. The alignment the process was supposed to create never happened — it was papered over by an impressive-looking document that gave everyone permission to stop asking hard questions.
Then the brief lands with vendors, triggering a hemorrhage of unpaid labor. Agencies and service providers spend dozens of hours mapping out deliverables, scoping processes, and writing proposals in response to a specification fundamentally disconnected from the organization's actual situation. Worse, they face a trap with no clean exit: respond to the letter and submit a proposal the organization can't afford; right-size the response to what the organization actually needs and risk disqualification against vendors that played along; or drag the prospective client through exhaustive clarification calls and risk coming across as tedious and "in the weeds" to work with. There is no honest move that isn't also a gamble.
And underneath all of it: vendors are doing careful, skilled, expensive work in response to requirements that members of the requesting organization may not themselves agree on. The AI authored a document that no human being on either side of the table truly owns. Everyone is responding to a brief that was never really written by anyone.
The selection process that follows is, predictably, a mess. Proposals come back dissimilar and difficult to compare — some responding to the document as written, some trying to reframe it, none quite on target. Without a shared frame of reference, the final selection often comes down to gut feel or familiarity rather than the factors a good RFP process is supposed to surface.
And the damage doesn't stop at selection. A well-crafted RFP builds shared understanding that carries momentum into the early phases of the project itself. A poorly formed one forces the selected vendor to retrace steps with the organization, working back toward a shared understanding of what was actually needed. In that retracing, gaps and misalignments surface — and become scope creep almost immediately. Issues a proper discovery process would have caught before the ink dried become live problems in the first weeks of an engagement that nobody budgeted for and nobody is quite prepared to own.
// WHERE AI ACTUALLY BELONGS
AI isn't the villain here, and avoiding it entirely isn't the answer. Used well, AI is an exceptional research and education companion during the discovery phase. It can help your team get smarter faster: understanding the landscape of available solutions, learning the vocabulary of a technology category, and identifying questions you hadn't thought to ask. That's AI doing what it does well — synthesizing general information to help you think more clearly. It's not making decisions. It's not writing requirements. It's not telling you what matters most to your organization, because it doesn't know and can't know.
Once a human-authored, prioritized brief exists, AI may be useful for formatting or tightening language — but not for deciding what belongs in the brief in the first place.
The moment AI is handed the authoring role, comprehensiveness overrides calibration. The human judgment that belongs at the center of a good RFP — the judgment that knows your budget, your team's capacity, your history with past initiatives, and where the real risks lie — gets quietly displaced by a tool that has no stake in whether your project succeeds.
// GET THE SEQUENCE RIGHT
What does good upstream clarity actually look like? It means doing a specific kind of work before a single requirement gets written: defining what success looks like in concrete terms, surfacing constraints and non-negotiables, ranking priorities honestly, and distinguishing must-haves from nice-to-haves. It means getting stakeholders aligned on what the organization actually needs — not what it might theoretically want — and being honest about what it can fund, staff, and sustain.
The organizations getting this right treat that clarity work as a distinct, protected phase — one that lives entirely upstream of the RFP. They collaborate with experienced advisors who can pressure-test a proposed scope before it becomes an official ask. They deliberately leave room in their process for vendors to contribute strategic insight, rather than just demonstrate the ability to respond to a specification. And when they do use AI, they use it to get smarter going in — not to write the output on the way out.
The RFP should be the output of your thinking, not a substitute for it. AI can make that output look very polished, but it can't know if you're thinking about the right things. It will never tell you when the whole premise needs to be reconsidered.
Get the sequence right, and AI becomes a genuine asset in your procurement process. Get it wrong — at either end of the effort spectrum — and even your best discovery work ends up buried under a specification nobody asked for, responded to by vendors who had no good way to engage with it honestly.
Stephan is a trusted strategist and consultant to some of the world's most renowned firms and organizations. Decades of hands-on experience allow him to architect impactful brand and digital experiences that drive business transformation. Stephan also consults on leadership, workflow processes, and M&A transitions.