
The Supply Chain Decision Debt: How Deferred Planning Choices Compound Into AI Failure

Andrew Pemberton

Co-founder & Development Director

May 1, 2026
9 min read

Most mid-market distributors rush to AI deployment without auditing what they're actually deciding. A Nottinghamshire food wholesaler spent £85,000 on demand forecasting AI that sat dormant because buying decisions existed only in one person's head. The pre-implementation audit—decision inventory, assumption mapping, rule documentation—is the gate that determines whether AI works or sits unused.

Introduction

A Nottinghamshire food wholesaler spent £85,000 on demand forecasting AI. Three months in, the system sat dormant because nobody could explain how current buying decisions were actually made. The buyer had 15 years of supplier relationships and product knowledge encoded in her head, none of it documented. The AI had transaction history but no context for why certain orders happened when they did.

This is the pre-implementation audit gap. Most mid-market distributors rush to AI deployment without auditing what they're actually deciding. They assume everyone knows how decisions work because they've been making them for years. In reality, operational decisions are a tangle of undocumented rules, personal preferences, and buried assumptions that only surface when you try to automate them.

Poor decision frameworks mean AI inherits bad logic. The technology works fine. The foundations don't. This article covers the diagnostic phase distributors skip: decision inventory, assumption mapping, and rule documentation. The audit that determines whether AI will succeed or sit unused.

Why Mid-Market Distributors Skip the Audit Phase

Competitors are deploying AI. Board members read about automation. Sales teams want better pricing tools. The instinct is to buy the technology and start training models immediately.

The assumption underneath is always the same: "We know how we decide." Pricing, fulfilment routing, inventory allocation—these feel understood because they happen every day. Teams believe the rules are obvious. In practice, they're not. Across our implementations, the average AI readiness score is 5.6 out of 10 (WithPraxis client data, 2025). The gap isn't technical capability. It's decision clarity.

A foodservice distributor in the Midlands took three days to change a single price. The spreadsheet got updated, emailed for approval, queried, corrected, re-approved, and uploaded. Nobody could articulate the pricing rule—it was "whatever feels right for the customer and the margin." When they deployed pricing AI, the system had no coherent logic to learn from. It saw random adjustments and couldn't extract a pattern.

Skipping the audit creates three problems. First, AI gets trained on bad logic and replicates it at scale. Second, decision latency persists because the real bottleneck—unclear rules—remains unaddressed. Third, margin leakage continues because nobody identified where the decision framework was broken. The false economy is obvious: spending £80,000 on technology to automate a process you haven't defined.

The Hidden Assumptions in Your Current Decisions

Every operational decision contains buried assumptions. "We always prioritise this customer." "We never discount below 12%." "We route from the nearest depot unless it's Tuesday." These rules exist, but they live in email threads, shift handovers, and the heads of people who've been there since 2009.

An industrial distributor in Yorkshire ran manual fulfilment routing based on driver knowledge. Each driver knew which customers wanted early delivery, which sites had restricted access, which orders could combine. None of this was written down. When they tried to deploy route optimisation AI, the system produced technically optimal routes that violated every unstated customer preference. Drivers rejected the new routes. The project stalled.

AI deployment without surfacing these assumptions leads to two outcomes. Either the AI replicates bad logic—because it learns from historical behaviour that encoded all the hidden rules—or it breaks established business rules that nobody documented. Both are expensive. The first perpetuates inefficiency. The second destroys customer relationships.

The audit finds these before technology arrives. A structured decision inventory surfaces the assumptions, tests whether they're still valid, and documents the ones that matter. The Yorkshire distributor eventually ran the audit, found 23 routing constraints that drivers had been working around, and built those into the AI logic. Route efficiency improved 40% because the technology respected the real-world constraints instead of ignoring them.
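
To make that concrete, here is a minimal sketch of how constraints surfaced by an audit might be captured as data and checked against a proposed route before it reaches a driver. The field names and route format are illustrative assumptions, not the Yorkshire distributor's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class SiteConstraint:
    """One documented constraint per customer site (illustrative fields)."""
    customer_id: str
    deliver_before: str | None = None        # early-delivery customers, e.g. "09:00"
    max_vehicle_tonnes: float | None = None  # restricted-access sites
    incompatible_orders: set[str] = field(default_factory=set)  # orders that must not share a run

def violations(route: list[dict], constraints: dict[str, SiteConstraint]) -> list[str]:
    """List the documented constraints a proposed route would break."""
    problems = []
    order_ids = {stop["order_id"] for stop in route}
    for stop in route:
        c = constraints.get(stop["customer_id"])
        if c is None:
            continue
        if c.deliver_before and stop["eta"] > c.deliver_before:
            problems.append(f"{stop['customer_id']}: ETA {stop['eta']} after {c.deliver_before}")
        if c.max_vehicle_tonnes and stop["vehicle_tonnes"] > c.max_vehicle_tonnes:
            problems.append(f"{stop['customer_id']}: vehicle exceeds site access limit")
        if c.incompatible_orders & (order_ids - {stop["order_id"]}):
            problems.append(f"{stop['customer_id']}: combined with an incompatible order")
    return problems
```

A route with an empty violations list is not guaranteed to be optimal, but it at least respects the constraints that previously lived only in drivers' heads.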

Decision Mapping: The Audit Framework

The decision audit is a structured process, not a consulting report. It produces a working document that becomes the blueprint for AI deployment. Five steps: inventory all operational decisions, document decision rules, identify data dependencies, surface assumptions, assess decision ownership.

Start with the inventory. Most mid-market distributors make 40-60 operational decisions daily: pricing adjustments, fulfilment priority, inventory allocation, credit approval, supplier selection, markdown timing. List them. For each decision, document who makes it, how often, what data they use, and what the current rule is—even if the rule is "manager's gut feel."
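
A minimal sketch of what one inventory line might look like once it is captured as a structured record rather than a spreadsheet row. The field names and values below are illustrative assumptions, not a prescribed template:

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """One line of the decision inventory, captured as a structured record."""
    decision: str              # e.g. "customer-specific price adjustment"
    owner: str                 # who actually makes the call today
    frequency_per_month: int   # how often it happens in practice
    data_used: list[str]       # the inputs the decision-maker looks at
    current_rule: str          # the rule as stated, even if it is gut feel
    documented: bool           # is the rule written down anywhere?

pricing = DecisionRecord(
    decision="customer-specific price adjustment",
    owner="sales manager, with finance approval",
    frequency_per_month=340,
    data_used=["commodity price feed", "customer margin history"],
    current_rule="whatever feels right for the customer and the margin",
    documented=False,
)
```

Recording the documented flag explicitly is what makes the gap visible: most inventories end up with a long list of decisions whose rules exist nowhere in writing.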

Next, document decision rules. This is where hidden logic surfaces. A fashion distributor thought they had a clear markdown policy. The audit found five different rules across three buyers: one marked down 20% at eight weeks, another held stock for next season, a third discounted based on available storage space. No written policy existed. The AI couldn't learn a coherent strategy because no strategy existed.
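
Writing those rules down as explicit logic is what makes the inconsistency testable. The sketch below is hypothetical, with thresholds loosely based on the example above: the same stock line gets three different answers, which is exactly what the AI could not reconcile.

```python
# Illustrative only: three buyers' markdown rules, written down as explicit logic.
def buyer_a_markdown(weeks_in_stock: int, warehouse_full: bool) -> float:
    return 0.20 if weeks_in_stock >= 8 else 0.0    # 20% off at eight weeks

def buyer_b_markdown(weeks_in_stock: int, warehouse_full: bool) -> float:
    return 0.0                                      # hold stock for next season

def buyer_c_markdown(weeks_in_stock: int, warehouse_full: bool) -> float:
    return 0.30 if warehouse_full else 0.0          # markdown driven by storage space

# The same stock line, ten weeks old, in a full warehouse:
for rule in (buyer_a_markdown, buyer_b_markdown, buyer_c_markdown):
    print(rule.__name__, rule(weeks_in_stock=10, warehouse_full=True))
# buyer_a_markdown 0.2 / buyer_b_markdown 0.0 / buyer_c_markdown 0.3
```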

Identify data dependencies. Which decisions require real-time data? Which can work from yesterday's numbers? A construction materials distributor discovered their route planning used customer location data that was 18 months out of date—half their sites had moved. The AI would have optimised routes to the wrong addresses.

Assess decision ownership. Across our implementations, 60-80% of operational decisions have unclear ownership (WithPraxis client data, 2025). Pricing is "shared responsibility" between sales and finance. Fulfilment priority is "whoever shouts loudest." The audit assigns accountability. A proper Decision Mapping exercise takes four to six hours with the right people in the room. The output is a decision architecture: who decides what, using which data, following which rules.

What the Audit Reveals (And Why It Matters)

The audit produces five outputs: decision inventory, rule documentation, assumption map, data quality assessment, and decision ownership clarity. Each prevents a specific failure mode.

The decision inventory reveals volume. A foodservice distributor thought they made "a few pricing decisions weekly." The audit found 340 price changes per month, most triggered by commodity price shifts. The volume meant manual pricing was unsustainable—but nobody had quantified it before. The inventory made the case for automation.

Rule documentation surfaces inconsistency. The construction distributor had no written policy on delivery windows. Each depot manager set their own. The audit found that one depot promised next-day delivery, another required 48 hours, a third worked on a weekly cycle. Customers in overlapping territories got different service levels for no reason. The AI deployment forced rule standardisation, which improved customer satisfaction before the technology even launched.

The assumption map finds buried constraints. The fashion distributor assumed markdown decisions were purely financial. The audit revealed that storage capacity drove half the clearance timing—buyers marked down stock early if the warehouse was full, regardless of demand. The AI needed to factor physical space into markdown recommendations, not just margin targets.

Data quality assessment identifies gaps before they break the AI. A building materials distributor had accurate transaction data but incomplete product dimensions. Route optimisation needs weight and volume to calculate vehicle capacity. The missing data would have caused the AI to plan impossible loads. The audit caught it before deployment.
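
In practice the assessment can start as a simple completeness check over the product file. The sketch below assumes a product list with optional weight and volume fields; the SKUs and figures are invented:

```python
# Hypothetical product records; None marks a missing dimension.
products = [
    {"sku": "BM-1042", "weight_kg": 25.0, "volume_m3": 0.020},
    {"sku": "BM-2310", "weight_kg": None, "volume_m3": 0.050},   # missing weight
    {"sku": "BM-5521", "weight_kg": 12.5, "volume_m3": None},    # missing volume
]

# Route optimisation needs both fields to plan vehicle loads.
incomplete = [p["sku"] for p in products
              if p["weight_kg"] is None or p["volume_m3"] is None]
coverage = 1 - len(incomplete) / len(products)

print(f"Dimension coverage: {coverage:.0%}")        # 33%
print(f"SKUs to fix before routing AI: {incomplete}")
```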

Decision ownership clarity prevents the "AI made me do it" problem. When decisions have clear owners, AI becomes a tool they use. When ownership is vague, AI becomes a scapegoat. The audit assigns accountability so that when the pricing system recommends a 15% increase, someone is responsible for approving or overriding it.

This is where you discover whether your decision framework is ready for AI, or whether it needs restructuring first. Most need restructuring. That's not a failure. It's the point of the audit.

From Audit to Implementation: The Revenue Protection Angle

The audit is a revenue-protecting investment, not overhead. Our clients achieve 25-40% improvement in key operational metrics within six months of deployment (WithPraxis client data, 2025). That improvement depends on decision clarity. Without the audit, AI deploys into uncertainty, adoption stalls, and outcomes disappoint.

The ROI calculation is straightforward. The audit takes four to six weeks and typically costs £15,000-£25,000 for a mid-market distributor. It prevents months of failed AI deployment and protects margin recovery. The foodservice distributor who spent three days changing prices recovered £180,000-£240,000 annually after deployment—but only because the audit identified the real bottleneck first. The problem wasn't the technology. It was the approval workflow and the lack of documented pricing rules.
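
A back-of-envelope version of that calculation, using the midpoints of the figures above and treating the audit as the enabling cost of the recovery:

```python
audit_cost = 20_000          # midpoint of the £15,000-£25,000 audit
annual_recovery = 210_000    # midpoint of the £180,000-£240,000 annual recovery

payback_weeks = audit_cost / (annual_recovery / 52)
print(f"Audit pays back in roughly {payback_weeks:.0f} weeks")   # ~5 weeks
```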

Skipping the audit means deploying AI into a decision framework you don't fully understand. The system makes recommendations nobody trusts. Users override the AI constantly because it conflicts with undocumented business rules. Adoption stays low. The investment sits unused. Six months later, the project gets quietly shelved and everyone agrees "the AI wasn't ready."

The audit prevents this. It surfaces the decisions that matter, documents the rules that exist, and identifies the gaps that need filling before technology arrives. It's not a report that sits in a drawer. It's the blueprint for implementation.

A Leicestershire industrial distributor ran the audit and discovered that 60% of their fulfilment decisions were already rule-based—they just weren't documented. The audit captured the rules, tested them against historical data, and found that formalising them would save 12% in fulfilment costs without any AI. They implemented the rules first, then layered AI on top for the remaining 40% of decisions that required optimisation. Total fulfilment cost reduction: 22%.

The audit is not overhead. It's the difference between AI that works and AI that sits unused. It's the gate that determines whether your £80,000 technology investment delivers £200,000 in margin recovery or gets abandoned after three months.

Conclusion

Decision clarity precedes technology clarity. Most mid-market distributors are ready for AI in the abstract—they have the data, the systems, and the budget. They're not ready for the right AI because they haven't audited what they're actually deciding.

The audit is the gate. It reveals whether your operational decisions are documented, whether your rules are consistent, and whether your team knows who owns which choices. Without it, AI inherits bad logic, replicates inefficiency, and fails to deliver the outcomes you paid for. With it, AI becomes a tool that makes your existing decision framework faster and more consistent. Understanding your decision framework before deployment is not optional. It's the foundation every AI project needs.

Learn more about Decision Mapping and Architecture.

Common questions

How does a lack of documented decision logic impact the performance of pricing AI?

AI systems trained on inconsistent or undocumented pricing rules fail to identify coherent patterns and often replicate existing margin leakage at scale. Without a defined framework, the technology interprets manual adjustments as random noise rather than strategic logic. This results in a system that cannot automate pricing effectively because the underlying business rules remain unarticulated.

What are the primary risks of deploying fulfilment AI without first auditing operational assumptions?

Deploying fulfilment technology without an audit often leads to the creation of technically optimal routes that violate unstated customer preferences or site constraints. This misalignment frequently causes drivers and operational staff to reject the system, stalling the project entirely. An audit surfaces these hidden constraints, such as restricted site access or delivery windows, ensuring the AI respects real-world requirements.

Which specific steps should a distributor take to complete a structured decision inventory?

Distributors must list every daily operational decision, from inventory allocation to credit approval, and document the frequency, data sources, and current rules for each. The process requires identifying who owns the decision and whether the logic is based on written policy or individual intuition. This inventory serves as the essential blueprint for configuring AI logic and assigning accountability across the organisation.

Why do mid-market distributors frequently experience high decision latency despite investing in automation?

Decision latency persists when distributors automate a process without addressing the underlying bottleneck of unclear rules and complex approval chains. Technology alone cannot resolve delays caused by spreadsheets being emailed for manual queries and corrections. High latency is a symptom of a broken decision framework rather than a lack of technical capability.

Themes

AI Implementation Strategy, Commerce Operations Intelligence
Andrew Pemberton

Co-founder & Development Director

Andrew is a co-founder of WithPraxis. With 25 years in commerce and technology development, he leads the build side of every engagement, turning AI strategy into working systems that fit how mid-market businesses actually operate. He has delivered projects across distribution, manufacturing, and retail for businesses from regional independents to national operators.

Connect on LinkedIn

