Why Australian Enterprises Are Ditching RFPs for AI Projects
The request for proposal has been a cornerstone of enterprise technology procurement for decades. It works reasonably well for known quantities — ERP implementations, network upgrades, managed services with clear specifications. But for AI projects, the RFP is increasingly being recognised as the wrong tool for the job. Australian enterprises are quietly abandoning it, and the reasons say a lot about how AI adoption actually works in practice.
The Specification Problem
An RFP requires you to define what you want before you ask someone to build it. That’s the fundamental issue. Most organisations approaching AI for the first time — or even the third time — don’t have a precise understanding of what’s achievable, what data quality issues will surface, or how their specific operational context will shape outcomes.
A procurement team might write an RFP for “an AI-powered demand forecasting system” and include dozens of pages of technical requirements. But the critical unknowns — whether the company’s historical data is clean enough, whether the forecasting granularity they want is statistically viable, whether the model will need retraining weekly or quarterly — can’t be answered on paper. They can only be answered by building something and testing it.
This isn’t a failure of procurement teams. It’s a structural mismatch between a process designed for defined deliverables and a technology category where the deliverable often isn’t clear until you’re halfway through development.
Speed and the Competitive Window
The RFP cycle in large Australian enterprises typically runs three to six months from initial drafting to vendor selection. Add another month or two for contract negotiation. By the time work begins, the competitive window for many AI applications has narrowed significantly.
This matters because AI capabilities are moving fast. A model architecture that was state of the art when the RFP was drafted may be superseded by the time the contract is signed. Requirements written in Q1 may not reflect the possibilities available in Q3. Companies that move faster — often mid-market firms without the same procurement overhead — are shipping working AI products while larger competitors are still evaluating tender responses.
The Australian Information Industry Association has noted this tension in its advocacy around government procurement reform, arguing that rigid tender processes are actively slowing AI adoption across both the public and private sectors.
The Shift to POC-First
What’s replacing the RFP isn’t chaos. It’s a more structured form of experimentation. The pattern emerging across Australian enterprises looks roughly like this: identify a business problem, engage a small number of credentialed vendors for a paid proof of concept lasting four to eight weeks, evaluate results against real data and real performance metrics, then decide whether to scale.
One approach gaining traction is what an Australian AI consultancy describes as “paid discovery” — a short engagement to validate assumptions before committing to a full build. The logic is straightforward: spend a relatively small amount upfront to de-risk a much larger investment. It’s cheaper than a failed six-month project, and it produces tangible evidence that an AI approach either works or doesn’t.
This POC-first model also changes the vendor relationship dynamic. Instead of evaluating proposals on paper — where every vendor looks impressive — the enterprise evaluates actual output. Did the model hit the agreed performance targets? Was the team responsive? Could they work with messy, real-world data rather than clean demo datasets?
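The "evaluate actual output" step lends itself to a simple, pre-agreed scorecard. Here is a minimal, purely illustrative sketch: two vendors' POC demand forecasts scored on the same holdout data using mean absolute percentage error (MAPE), against an acceptance threshold fixed before the POC begins. The vendor names, demand figures, and 15% threshold are all hypothetical.

```python
# Hypothetical POC scorecard: compare vendor forecasts on shared holdout data.
# All names, numbers, and the threshold below are illustrative, not real results.

def mape(actual, forecast):
    """Mean absolute percentage error, expressed as a percentage."""
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

# Actual weekly demand over an eight-week holdout period (made-up data).
actual = [120, 135, 128, 150, 142, 138, 160, 155]

# Each vendor's POC forecasts for the same eight weeks (made-up data).
poc_results = {
    "vendor_a": [118, 140, 125, 148, 150, 130, 158, 160],
    "vendor_b": [95, 115, 105, 175, 115, 160, 135, 185],
}

# Acceptance threshold agreed with both vendors before the POC started.
ACCEPTANCE_THRESHOLD = 15.0  # maximum acceptable MAPE, in percent

for vendor, forecast in poc_results.items():
    error = mape(actual, forecast)
    verdict = "pass" if error <= ACCEPTANCE_THRESHOLD else "fail"
    print(f"{vendor}: MAPE {error:.1f}% -> {verdict}")
```

The point of fixing the metric and threshold up front is that vendor selection becomes a comparison of measured results, not a judgment call on proposal quality.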
What Procurement Teams Are Doing Instead
The shift doesn’t eliminate procurement rigour. It redirects it. Organisations are still doing due diligence on vendor credentials, data security practices, and intellectual property terms. But they’re doing it as part of a lighter-weight engagement framework rather than a monolithic RFP process.
Some companies are establishing pre-approved vendor panels for AI work, modelled loosely on how consulting panels operate. Others are running competitive POCs — two or three vendors tackling the same problem in parallel, with the winner earning the full engagement. According to MIT Sloan Management Review’s research on AI procurement, organisations that adopt iterative evaluation approaches report higher satisfaction with vendor outcomes than those using traditional tendering.
Finance and legal teams have had to adjust. The concept of spending money on an experiment without a guaranteed deliverable sits uncomfortably with traditional capital expenditure frameworks. But the alternative — spending far more on a fully scoped project built on untested assumptions — is increasingly seen as the riskier option.
The Broader Implication
The decline of the AI RFP reflects something larger: Australian enterprises are starting to treat AI less like an IT infrastructure purchase and more like product development. That means accepting uncertainty, iterating quickly, and measuring outcomes rather than outputs.
It’s not that the RFP will disappear entirely. For mature AI applications with well-understood requirements — deploying a proven computer vision model on a production line, for instance — a traditional procurement approach still makes sense. But for anything exploratory, anything where the question is “can AI solve this problem?”, the answer increasingly starts with a proof of concept, not a 60-page tender document.