A PE deal for a collision MSO doesn't die in the LOI. It dies in the data room.
Three weeks in, the sponsor asks for cycle time by shop by month for the last 36 months. Your team pulls CCC reports one shop at a time, formats them in Excel, and sends back a spreadsheet that doesn't tie to the P&L. The sponsor asks why. You don't have a good answer. Momentum stalls.
Here's what a ready data room looks like for a collision MSO, and where the most common gaps cost you multiples.
What Sponsors Actually Open First
Not the pitch deck. Not the IM. The raw data:
- Trailing 36-month P&L by shop with consistent definitions across locations.
- KPI time series: cycle time, touch time, severity, supplements per RO, DRP mix, gross profit, AR aging—monthly, by shop.
- Customer (carrier) concentration with revenue and volume splits over time.
- Organic vs acquired growth by vintage.
- EBITDA bridges from reported to adjusted.
If any of these take more than a day to produce, that's a signal your operating data is in worse shape than your pitch deck suggests. Sponsors notice.
The Full Checklist
Financial
- Trailing 36-month P&L by shop, monthly granularity
- Consolidated P&L with elimination entries reconciling to by-shop
- Balance sheet monthly for trailing 24 months
- AR aging by carrier by shop
- AP aging by vendor
- Cash conversion cycle
- Capex history and plan
- Debt schedule
- Real estate: owned vs leased by shop, lease terms, options
- Insurance coverage schedule
Operational KPIs (monthly, by shop, trailing 36 months)
- Cycle time (mean, median, p90)
- Touch time (mean, median)
- Severity (average RO total)
- Supplements per RO and supplement dollars as % of RO total
- Vehicles delivered per month
- RO count opened
- Captured jobs vs lost jobs at write-up
- CSI score (or NPS)
- DRP mix (% revenue by carrier program)
- Parts gross profit %
- Labor gross profit %
- Technician count (productive, non-productive)
- Tech productivity (flag hours vs clock hours)
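The cycle-time statistics above (mean, median, p90) are cheap to compute once RO open and delivery dates live in one table. A minimal sketch in Python, using hypothetical shop names and made-up dates (the real source would be CCC export data):

```python
from datetime import date
from statistics import mean, median, quantiles

# Hypothetical repair orders: (shop, opened, delivered)
ros = [
    ("A", date(2024, 3, 1), date(2024, 3, 6)),
    ("A", date(2024, 3, 3), date(2024, 3, 10)),
    ("A", date(2024, 3, 5), date(2024, 3, 25)),  # parts-delay outlier pulls p90 up
    ("B", date(2024, 3, 2), date(2024, 3, 8)),
    ("B", date(2024, 3, 4), date(2024, 3, 12)),
]

def cycle_time_stats(ros, shop):
    """Calendar days from RO open to delivery for one shop's ROs."""
    days = sorted((d - o).days for s, o, d in ros if s == shop)
    return {
        "mean": mean(days),
        "median": median(days),
        # quantiles with n=10 returns deciles; index 8 is the 90th percentile
        "p90": quantiles(days, n=10)[8],
    }

print(cycle_time_stats(ros, "A"))
```

Reporting the median and p90 alongside the mean matters in this business: one car waiting on parts can move the mean by days while the median barely shifts.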
Carrier / DRP
- DRP agreement copies, current and historical
- Revenue by carrier by shop
- Volume by carrier by shop
- Severity by carrier
- Cycle time by carrier
- Customer satisfaction by carrier
- Any carrier on probation or terminated—reasons and timelines
Workforce
- Headcount by shop by role
- Technician tenure distribution
- Turnover by shop, trailing 24 months
- Comp structures (flag rate vs hourly vs hybrid)
- Open positions and fill times
- Training and certification status (I-CAR, OEM)
Real Estate and Equipment
- Square footage by shop, bays by shop
- Utilization rates (bays in use vs available)
- Equipment inventory by shop with age
- Recent capex and planned capex
Growth and M&A
- Acquisition history: targets, close dates, purchase prices, multiples
- Pre- and post-acquisition KPIs for each target
- Integration timelines and issues
- Organic revenue growth by shop by year
- Pipeline of active M&A targets
Systems and Data
- List of systems in use: CCC version, DMS, accounting, time-clock, parts procurement
- Variance across shops (which shops use what)
- Data warehouse or lack thereof
- Reporting cadence and distribution
- Any recent system changes or planned migrations
Where Deals Actually Slip
After enough transactions, the same patterns keep surfacing as the ones that kill momentum or cost multiples:
Inconsistent definitions across shops. Shop A calls "cycle time" one thing, Shop B calls it another. When the sponsor tries to roll up, nothing ties. Sponsors read this as operational immaturity and price accordingly.
No by-shop visibility. Consolidated numbers look fine. The sponsor asks for by-shop and gets 30 Excel files with different structures. The sponsor's analyst spends a week reformatting, and the seller loses 10 days of momentum.
KPIs that can't be reproduced. The pitch deck shows a 5-day cycle time trend. The data room produces a 7-day trend. Neither is wrong; they're computed differently. The explanation takes a meeting. The trust takes longer to rebuild.
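The divergence is easy to reproduce. A toy sketch with hypothetical dates: one definition counts calendar days from RO open to delivery, the other from production start to delivery, and both are defensible until someone writes the definition down:

```python
from datetime import date

# One hypothetical RO: opened, moved into production, delivered
opened = date(2024, 5, 1)
production_start = date(2024, 5, 3)  # car sat waiting on parts first
delivered = date(2024, 5, 8)

open_to_delivery = (delivered - opened).days              # the data room's number
production_to_delivery = (delivered - production_start).days  # the pitch deck's number

# Same RO, two "cycle times"; both correct under their own definition.
assert open_to_delivery == 7
assert production_to_delivery == 5
```

Multiply that two-day gap across thousands of ROs and the pitch deck and data room tell different stories from identical underlying data.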
Missing longitudinal data. Sponsors want three years of monthly history. You have 18 months because that's when the data warehouse was built. Pre-warehouse data requires reconstructing from exports, and the numbers rarely match what was reported to the board at the time.
Unexplained outliers. One shop had a 4-week cycle time spike in Q3 2024 because a key tech walked out. It's explainable; it's not explained in the data. Sponsors assume the worst until you tell them otherwise.
What Ready Looks Like
A collision MSO whose data room gets sponsor praise (and better terms) has a few things in common:
- A single canonical source of truth. Usually a warehouse with dbt models and documented definitions, not Excel.
- Monthly metric snapshots preserved. The numbers reported at a board meeting in March 2023 can still be reproduced in April 2026.
- By-shop reporting that reconciles to consolidated. Add up the shops, get the total.
- KPI definitions documented. Not "cycle time" — "calendar time from production-start to customer-pickup, computed from clock data."
- Narrative annotations on anomalies. Spikes and dips are labeled with context.
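The reconciliation point above can be enforced mechanically rather than checked by hand. A minimal sketch, with hypothetical shop names and revenue figures, of the kind of tie-out test a warehouse (in dbt or otherwise) would run every close:

```python
# Hypothetical monthly revenue by shop and the consolidated figure
by_shop = {"shop_a": 412_000, "shop_b": 377_500, "shop_c": 298_250}
consolidated = 1_087_750

def reconciles(by_shop, consolidated, tolerance=0):
    """True when shop-level numbers roll up to the consolidated total
    within an allowed tolerance (e.g. for rounding or eliminations)."""
    gap = abs(sum(by_shop.values()) - consolidated)
    return gap <= tolerance

assert reconciles(by_shop, consolidated), "by-shop does not tie to consolidated"
```

When this check runs monthly and fails loudly, the "30 Excel files with different structures" problem never gets a chance to form.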
This level of readiness is rarely built in 90 days. Sponsors can tell when a data room was reverse-engineered for the deal.
The ROI
Two collision MSO deals with otherwise identical profiles will trade at different multiples depending on data room quality. We've seen the spread quoted as 0.5–1.5 turns of EBITDA on mid-sized deals.
That's millions of dollars sitting in the gap between "we know this" and "we can prove it."