ECS Tuning had a sophisticated in-house claims management process. They had a dedicated team. They filed claims consistently. Internal reports showed steady recovery numbers.
Then we ran an independent validation assessment.
What we found: eligible claims getting missed and claims being filed without adequate documentation. To the tune of more than $200,000 per year.
This is the gap between perceived performance and actual performance. It exists in almost every enterprise shipping operation I analyze. The gap is almost always larger than expected.
Read the full case study here.
The Problem with Internal Benchmarks
When enterprise shippers tell me their claims recovery is good, they're measuring against one of two baselines.
First scenario: They have an existing claims process that appears functional. Recovery rates look normal compared to their own historical performance. They measure in hard dollars per month or quarter. In the best cases, they measure recovery relative to shipping spend.
What they don't have: external comparison data. They can't benchmark against best-in-class claims filers because that data doesn't exist inside their four walls.
Second scenario: They waived their rights to file claims in exchange for a carrier rebate. They no longer bear the burden of filing. The rebate shows up as a line item. Problem solved.
What they don't know: carriers make high margins on these rebate deals. I've seen rebates cover one-tenth to one-fifth of total recovery eligibility. The math is worse than it appears.
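To make the rebate math concrete, here is a minimal sketch with invented numbers (the spend, eligibility rate, and rebate fraction are all hypothetical illustrations, not figures from any real shipper):

```python
# Hypothetical illustration of a carrier rebate versus actual claims eligibility.
# All figures are invented for the example; real numbers vary by shipper.

annual_shipping_spend = 10_000_000   # $10M parcel spend (assumed)
claims_eligibility_rate = 0.01       # ~1% of spend eligible for recovery (assumed)
total_eligibility = annual_shipping_spend * claims_eligibility_rate  # $100,000

rebate_fraction = 0.15               # rebate covers one-tenth to one-fifth of eligibility
rebate_value = total_eligibility * rebate_fraction   # $15,000

forgone_recovery = total_eligibility - rebate_value  # $85,000 left on the table

print(f"Eligible recovery: ${total_eligibility:,.0f}")
print(f"Rebate received:   ${rebate_value:,.0f}")
print(f"Forgone recovery:  ${forgone_recovery:,.0f}")
```

Even at the generous end of the range, the rebate line item masks most of the recoverable value.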
When I run an independent benchmark for shippers who took the rebate deal, the typical reaction is disbelief. I have to show them the supporting data. Once they understand the gap, they typically work with their carrier to remove the rebate and restore claims filing.
What Independent Validation Actually Reveals
The validation process takes days, not months. The shipper connects their carrier data streams to our system. We support ten domestic parcel carriers and counting.
Our system runs two automated scans.
First scan: We baseline existing claims activity and recovery rates. We flag claims stuck in process that need documentation to move forward. These are claims already filed but stalled due to missing information.
Second scan: We evaluate all shipments to identify eligible claims that should have been filed but weren't. This is where the real gap emerges.
The value of missed claims often exceeds what's being filed today.
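The two scans can be sketched roughly as follows. This is a simplified illustration: the record format and field names (`eligible`, `claim_filed`, `docs_complete`) are assumptions for the example, not the actual system's schema.

```python
# Sketch of the two automated scans over carrier shipment records.
# The record structure here is hypothetical.

shipments = [
    {"id": "1Z01", "eligible": True,  "claim_filed": True,  "docs_complete": True},
    {"id": "1Z02", "eligible": True,  "claim_filed": True,  "docs_complete": False},  # stalled
    {"id": "1Z03", "eligible": True,  "claim_filed": False, "docs_complete": False},  # missed
    {"id": "1Z04", "eligible": False, "claim_filed": False, "docs_complete": False},
]

# Scan 1: baseline existing claims activity and flag claims stalled on documentation.
filed = [s for s in shipments if s["claim_filed"]]
stuck = [s for s in filed if not s["docs_complete"]]

# Scan 2: eligible claims that were never filed -- where the real gap emerges.
missed = [s for s in shipments if s["eligible"] and not s["claim_filed"]]

print(f"filed={len(filed)} stuck={len(stuck)} missed={len(missed)}")
```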
One of my favorite examples: a customer filing claims only when surfaced by their end customers. About 250 claims per month on 400,000 shipments. Our data uncovered they were overlooking 90% of their eligible claims. Around 3,000 claims per month. This represented a $300,000 per month opportunity.
We turned claims on. By the end of month two, the customer had recovered more than $1 million.
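A back-of-envelope check on the case figures above (the average claim value is implied by the stated totals, not something the customer reported directly):

```python
# Back-of-envelope arithmetic on the case figures above.
claims_filed_per_month = 250
claims_missed_per_month = 3_000
monthly_opportunity = 300_000  # dollars

# Implied average value per missed claim.
implied_avg_claim_value = monthly_opportunity / claims_missed_per_month  # $100

# Share of eligible claims being overlooked.
eligible_total = claims_filed_per_month + claims_missed_per_month
share_missed = claims_missed_per_month / eligible_total  # ~92%, i.e. "about 90%"

print(f"Implied average claim value: ${implied_avg_claim_value:.0f}")
print(f"Share of eligible claims missed: {share_missed:.0%}")
```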
The Approval Rate Gap
The gap between claims filed and claims approved reveals process quality.
For loss claims where there's no carrier proof of delivery, I see 90% or higher success rates with well-organized shippers. I see success rates of 10% or lower with shippers who aren't set up properly or don't submit proper documentation.
Most enterprise shippers benchmark success against their own historical payouts. A 10% success rate may appear good because they have no basis for comparison otherwise.
This is the awareness gap. Internal reports can't surface what you don't know to measure.
Three factors drive filed-versus-approved rates: the loss-versus-damage claim ratio (loss claims have significantly higher success rates when filed properly), business setup complexity (multi-brand enterprises and multi-channel fulfillment operations must structure claims in accordance with strict carrier policies), and documentation completeness.
Carriers publish little public guidance on their documentation requirements. This is why shippers find value in partnering with specialists. We help them unlock their highest potential approval rate by applying best practices we see and deploy at the best claims enterprises.
The Strategic Timing Window
I recommend running independent validation 90 days before carrier contract renewal.
From a planning standpoint, 90 days provides adequate time for negotiation based on typical turnaround times from first conversation through close. 120 days works. Less than 90 days doesn't provide enough time to work through important deal points before the agreement lapses.
Claims rates inform true carrier performance. If you understand the true rates of service failure and can articulate these back to the carrier, it creates leverage in negotiations.
Beyond the money recovered, independent benchmarking data unlocks positioning that shippers wouldn't have otherwise. You're not just recovering dollars. You're establishing data-backed performance baselines that shift the negotiation dynamic.
The Data Decay Problem
Historical data doesn't predict current performance reliably.
With claims, you have hard eligibility windows. If it takes too long to assemble data or identify claim eligibility, you miss the filing window and lose the recovery as a result. Both major carriers require refund requests to be filed within 15 days of the expected delivery date.
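A simple deadline check under the 15-day rule described above (this is a simplification: the function names are mine, and real carrier rules have additional conditions around claim type and service level):

```python
from datetime import date, timedelta

FILING_WINDOW_DAYS = 15  # refund requests due within 15 days of expected delivery


def filing_deadline(expected_delivery: date) -> date:
    """Last day a refund request can be filed for this shipment."""
    return expected_delivery + timedelta(days=FILING_WINDOW_DAYS)


def still_eligible(expected_delivery: date, today: date) -> bool:
    """True if the refund filing window is still open."""
    return today <= filing_deadline(expected_delivery)


expected = date(2024, 3, 1)
print(still_eligible(expected, date(2024, 3, 10)))  # within the window
print(still_eligible(expected, date(2024, 3, 20)))  # window has lapsed
```

The point of the sketch: eligibility is a function of today's date, which is why stale periodic snapshots silently convert recoverable claims into expired ones.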
Claims performance data is inconsistent. It ebbs and flows during the year based on carrier volume strain and package volume. Q3 data doesn't predict Q1 performance accurately.
Tracking data expires. Static snapshots age quickly. This is why continuous data connectivity matters more than periodic audits.
Customer 2 Example
Mid-pilot checkpoint with Customer 2, a scaled retailer, revealed what independent validation surfaces at scale.
We presented pilot results: $441,000 recovered in 14 days across 4,100 claims. Another $206,000 pending from 1,400 additional claims.
The complexity and trade-offs of their current UPS rebate structure became clear. We recommended shifting value from rebates to upfront discounts. The rebate was covering a fraction of actual recovery eligibility.
Technical challenges emerged around including accurate product values and descriptions in claims. The long-term risk of using generic values: increased claim denials if not addressed within months.
This is what validation looks like at enterprise scale. The gaps compound. The opportunities multiply. The data reveals patterns invisible from inside the operation.
The Decision Framework
I run free assessments for everyone because the lift to assess is incredibly light.
The "we're actually good" minority would surface if the eligibility gap doesn't exist (they're filing all eligible claims) and there are no stuck claims requiring documentation.
So far, I haven't found a single customer in this minority.
Any positive resolution rate below 80% for loss claims without proof of delivery is not optimized. That's the line. Below that threshold, intervention is necessary.
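The threshold can be stated as a one-line check (the function name and inputs here are mine for illustration, not a product API):

```python
OPTIMIZED_THRESHOLD = 0.80  # loss claims without carrier proof of delivery


def needs_intervention(claims_approved: int, claims_filed: int) -> bool:
    """True if the positive-resolution rate falls below the 80% line."""
    if claims_filed == 0:
        return False  # nothing filed yet; nothing to benchmark
    return claims_approved / claims_filed < OPTIMIZED_THRESHOLD


print(needs_intervention(90, 100))   # 90% approval: at best-in-class levels
print(needs_intervention(10, 100))   # 10% approval: intervention needed
```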
Why Manual Processes Fail at Scale
I remember looking at a spreadsheet from an enterprise shipper filing more than 2,000 claims per week with UPS.
Not only was much of the static data inaccurate (broken VLOOKUP formulas, incorrect mappings), but getting status updates from carriers in bulk was nearly impossible. The sheer number of carrier accounts and the difficulty of accessing information via carrier portals created constant friction.
Virtually every status was incorrect after a week. The team faced huge manual effort trying to download reports and map them into their spreadsheet.
It was a nightmare.
I assumed operational business systems were designed to work cleanly at scale and to interact with related systems. In reality, most enterprise shippers lack this continuity and end up managing claims in spreadsheets.
Manual coordination will always fail at scale. Organizations cycle through temporary claims initiatives that succeed briefly under focused attention, then collapse when attention shifts. This pattern persists because underlying data connectivity infrastructure never gets built. Only renewed manual effort gets applied.
The Activity Versus Outcome Problem
Activity metrics include time spent managing claims and claims-related processes, claims filed, and other output metrics.
Recovery metrics (dollars landed in the bank account) are where the rubber meets the road: the real outcome metric to pair with output and activity metrics.
The outcome metric must justify the activity.
Point-solution vendors optimize for visible activity metrics (claims filed, tickets opened) rather than ultimate outcome metrics (payments received, reconciliations completed). Upstream intervention requires less infrastructure investment and produces faster demonstration of activity.
This creates a market full of partial solutions.
We boost activity metrics (shifting them from customer teams to our software) and boost outcome metrics by lifting recovery. End-to-end orchestration matters. Filing claims without confirming payment reconciliation is incomplete resolution.
The Carrier Counterparty Reframe
Claims are pain points for carriers and enterprise shippers alike.
Removing friction in the middle—making claims filing easier, ensuring claims that hit carriers' desks are valid and documentation is complete—benefits both sides of the equation.
Carriers benefit from reduced garbage-claim volume and complete-information submissions. By treating carriers as pure adversaries, the industry fails to recognize alignment opportunity. Better input quality reduces carrier processing cost while improving shipper recovery rate.
This reframe changes what becomes possible in renewal negotiations. Removing friction removes a potential blocker in negotiations.
What Happens Next
When you're telling a VP of Operations their team missed $300,000 in quarterly recovery, or showing a sophisticated operation like the customers mentioned here that there's $7.7 million in annual savings on the table, you're walking a tightrope.
We assure them this isn't money they could've unlocked using anyone else. We're the only end-to-end claims solution on the market.
It's exciting for them when they can visualize the results. One of these team members was promoted shortly after we kicked off, possibly as a result of this discovery.
The gap between perceived and actual performance exists in your operation. The question is whether you're measuring against internal baselines or external reality.
Independent validation takes days. The data either confirms you're in the rare minority operating at best-in-class levels, or it reveals the gap.
What would you find if you ran the assessment today?