Most contractors think their business is improving when it isn't.
Not because they ignore the numbers, but because they are looking at the wrong comparison. They are measuring this month against last month and reading the direction of that arrow as a signal about the health of the business. It is not. It is a signal about the calendar.
Home improvement is a seasonal industry. Demand rises and falls in patterns that repeat year after year across every vertical. The variation between peak months and trough months is significant and consistent. When you measure April against March, you are not measuring whether your business improved. You are measuring whether spring arrived. It did. It always does.
If you are not comparing against last year, you are not measuring performance. You are measuring the calendar.
The consequence of this is not abstract. Budgets get allocated to sources that look good in strong months but do not hold. Sales teams get blamed for problems that originate upstream in lead quality or timing. Growth gets mistaken for efficiency. And the decisions that follow from those misreadings compound over time into a marketing operation that is always reacting to recent months rather than acting on a clear trend.
Why Month-Over-Month Is the Wrong Baseline
Month-over-month became the default comparison because it is fast and available. Every platform reports it. Every dashboard surfaces it. It requires no additional data architecture to produce. You compare this month's number to last month's number and the result is immediately legible.
The problem is that it conflates two completely different signals: what the business did differently, and what the season did automatically. In most months, the seasonal effect is larger than the operational effect. A strong April compared to a weak March is not evidence that anything changed in the business. It is evidence that the demand cycle moved, as it does every year, in every market, for every contractor in the category.
Month-over-month reporting cannot separate those signals. It presents a single number that carries both, and invites the reader to draw conclusions that the data does not support. The 18% revenue increase from March to April looks like momentum. It may be entirely seasonal. You cannot tell from the comparison itself.
The Comparison That Isolates Signal
The baseline that separates business performance from seasonal variation is the same period in the prior year. April this year versus April last year. Q2 this year versus Q2 last year. The rolling twelve months ending this month versus the rolling twelve months ending this month last year.
When you compare equivalent periods, the seasonal effect cancels out. Both periods experienced the same calendar dynamics. What remains after the seasonal effect is removed is the actual change in business performance. A business that grew retained revenue 12% April-over-April is a business that genuinely improved. A business that grew 18% month-over-month in April may have simply experienced the same seasonal lift that every contractor in the market experienced.
The same-period comparison does not eliminate all noise. Market conditions shift year over year. A particularly strong or weak housing market in a given period creates variation that the prior-year comparison does not fully control. But it eliminates the largest and most consistent source of noise in month-over-month reporting, which is the seasonal cycle, and it does so without requiring any additional data beyond what most operations already collect.
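To make the difference concrete, here is a minimal sketch with invented revenue figures. The same April can show a large month-over-month jump and almost no change against the prior April; only the second number says anything about the business.

```python
# A minimal sketch of the two comparisons, using made-up monthly revenue figures.
# The numbers are illustrative, not drawn from any real business.

revenue = {
    ("2023", "Mar"): 410_000, ("2023", "Apr"): 495_000,
    ("2024", "Mar"): 420_000, ("2024", "Apr"): 496_000,
}

def pct_change(new: float, old: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Month-over-month: April 2024 vs March 2024 -- carries the seasonal lift.
mom = pct_change(revenue[("2024", "Apr")], revenue[("2024", "Mar")])

# Same period, prior year: April 2024 vs April 2023 -- the seasonal effect cancels.
yoy = pct_change(revenue[("2024", "Apr")], revenue[("2023", "Apr")])

print(f"March -> April 2024:      {mom:+.1f}%")   # roughly +18%, mostly the calendar
print(f"April 2024 vs April 2023: {yoy:+.1f}%")   # roughly +0.2%, the business barely moved
```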
Building the QoQ Comparison
A proper QoQ comparison, meaning the current quarter measured against the same quarter a year earlier rather than the quarter before it, does not live at the top line. It lives inside the funnel. Total revenue QoQ tells you whether the business improved. Source-level QoQ tells you why, and which parts of the operation are driving the improvement versus which are masking it.
The framework tracks four layers for each lead source, compared against the equivalent prior period: volume, conversion, integrity, and outcome. A minimal code sketch of the computation follows the four questions below.
Volume: How many leads did this source deliver in the current period versus the same period last year, at the same or similar spend level? Changes in lead volume at constant spend reflect changes in source performance. Changes in lead volume at higher spend reflect budget allocation decisions. You need to know which you are looking at before drawing conclusions.
Conversion: What percentage of leads from this source set an appointment, and what percentage of those appointments ran and closed? Set rate and close rate tracked together at the source level show whether the funnel is getting more or less efficient at converting the leads this source produces. Improvement in both, at the same lead volume, is genuine operational improvement. Improvement in one while the other declines requires investigation.
Integrity: What percentage of closed jobs from this source cancelled before completion? Cancel rate is the metric most QoQ comparisons omit, and its omission creates the most distortion. A business that improved its close rate QoQ while also increasing its cancel rate may have produced less retained revenue, not more. Cancel rate is the integrity check on the conversion layer.
Outcome: What retained revenue did this source produce in the current period versus the prior year equivalent? This is the number that connects all the upstream metrics to an actual business result. Retained revenue per source, per dollar spent, compared against the same period last year, is the clearest available signal of whether a source is genuinely performing better or worse.
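The sketch below assembles those four layers for a single source across two equivalent periods. The record shape and field names (set_appointment, closed, cancelled, job_value) are hypothetical stand-ins for whatever the CRM actually stores, not a prescribed schema.

```python
# A sketch of the four-layer comparison for one lead source across two equivalent
# periods. Field names are hypothetical; map them to what your CRM actually stores.
from dataclasses import dataclass

@dataclass
class Lead:
    source: str
    set_appointment: bool   # an appointment was set
    closed: bool            # the appointment ran and the job was sold
    cancelled: bool         # the sold job cancelled before completion
    job_value: float        # contract value, 0 if never sold

def layer_metrics(leads: list[Lead]) -> dict:
    """Volume, conversion, integrity, and outcome layers for one source and period."""
    n = len(leads)
    sets = sum(l.set_appointment for l in leads)
    closes = sum(l.closed for l in leads)
    cancels = sum(l.cancelled for l in leads)
    retained = sum(l.job_value for l in leads if l.closed and not l.cancelled)
    return {
        "leads": n,                                          # volume
        "set_rate": sets / n if n else 0.0,                  # conversion
        "close_rate": closes / sets if sets else 0.0,        # conversion
        "cancel_rate": cancels / closes if closes else 0.0,  # integrity
        "retained_revenue": retained,                        # outcome
    }

def compare_periods(current: list[Lead], prior_year: list[Lead]) -> dict:
    """Current period vs the equivalent period one year earlier, metric by metric."""
    cur, pri = layer_metrics(current), layer_metrics(prior_year)
    return {k: {"current": cur[k], "prior_year": pri[k], "change": cur[k] - pri[k]}
            for k in cur}
```

Running compare_periods on, say, this year's April-to-June leads for one source and the same months last year produces the per-layer deltas the rest of this piece refers to.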
What the QoQ Comparison Reveals
When source-level QoQ data is assembled consistently, several patterns emerge that month-over-month reporting systematically conceals.
Source drift is one of the most common and least visible. A source that performed well last year may be declining this year at the same spend level, but the decline is gradual and gets absorbed into seasonal variation in month-over-month reporting. A QoQ comparison makes it visible because the seasonal effect is controlled. The source is being evaluated against an equivalent period, and if it is producing less retained revenue per dollar than it did in that period, the trend is real.
Funnel degradation is another pattern that month-over-month reporting obscures. A business can grow lead volume QoQ while its funnel efficiency declines. Set rate drops slightly. Close rate holds. Cancel rate creeps up. Total revenue looks flat or slightly positive because volume is carrying the number. The underlying unit economics are getting worse. A QoQ analysis at the source level will show the degradation in the conversion and integrity layers before it becomes visible at the outcome layer.
Mix shift effects are particularly important for businesses that actively change their lead source allocation. When a business shifts budget toward higher-intent sources, aggregate metrics improve for reasons unrelated to operational performance. A QoQ source-level comparison separates the mix effect from the performance effect, showing which sources improved and which declined independent of the allocation change.
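One way to separate the two effects is a simple shift-share style decomposition: hold the prior year's source mix constant, apply the current per-source rates, and see how much of the aggregate change survives. The sources and figures below are invented purely to illustrate the mechanics.

```python
# A sketch of a mix-versus-performance decomposition for an aggregate rate
# (for example, close rate). Source names and figures are invented.

prior = {   # source -> (share of total leads, close rate) in the prior-year period
    "paid_search":   (0.50, 0.30),
    "home_show":     (0.30, 0.40),
    "door_knocking": (0.20, 0.20),
}
current = {  # same sources, current period: more budget shifted toward home_show
    "paid_search":   (0.30, 0.30),
    "home_show":     (0.55, 0.40),
    "door_knocking": (0.15, 0.20),
}

def aggregate(mix_rates: dict) -> float:
    """Lead-share-weighted aggregate rate."""
    return sum(share * rate for share, rate in mix_rates.values())

prior_agg = aggregate(prior)
current_agg = aggregate(current)

# Counterfactual: current per-source rates weighted by the prior-year mix.
# Whatever is left after removing this is the mix-shift effect.
counterfactual = sum(prior[s][0] * current[s][1] for s in prior)

performance_effect = counterfactual - prior_agg   # per-source improvement
mix_effect = current_agg - counterfactual         # effect of reallocating leads

print(f"prior {prior_agg:.3f} -> current {current_agg:.3f}")
print(f"performance effect: {performance_effect:+.3f}")
print(f"mix effect:         {mix_effect:+.3f}")
```

In this invented case the aggregate close rate rises three points while no individual source gets any better, which is exactly the pattern an aggregate-only view misreads as operational improvement.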
The Rolling Twelve-Month View
Quarterly comparisons are useful for identifying trends within a season. Rolling twelve-month comparisons are more powerful because they eliminate both monthly and quarterly seasonality simultaneously.
The rolling twelve months ending this month, compared to the rolling twelve months ending this month last year, captures a full seasonal cycle on both sides of the comparison. It is the cleanest available signal of whether a business is improving or declining, stripped of calendar noise at both the monthly and quarterly level.
For a home improvement contractor, the rolling twelve-month view should track retained revenue by source, cost-per-acquired-revenue by source, and cancel rate by source, compared against the prior year equivalent rolling window. A source whose cost-per-acquired-revenue improved year-over-year in the rolling twelve-month view is a source that genuinely became more efficient. A source whose cost-per-acquired-revenue worsened is a source that deserves scrutiny regardless of how it performed in any individual month.
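Here is a sketch of how that rolling comparison might be assembled from monthly rollups. The row fields (year, month, source, spend, retained_revenue, closed_jobs, cancelled_jobs) are assumptions about the shape of the reporting data, not a required schema.

```python
# A sketch of the rolling twelve-month comparison by source, built from
# hypothetical monthly rollup rows. Field names are illustrative only.

def month_index(year: int, month: int) -> int:
    """Months since year zero, so window math is plain integer arithmetic."""
    return year * 12 + (month - 1)

def rolling_totals(rows: list[dict], end_year: int, end_month: int, months: int = 12) -> dict:
    """Sum spend, retained revenue, closes, and cancels per source over the
    window of `months` months ending in (end_year, end_month), inclusive."""
    end = month_index(end_year, end_month)
    start = end - months + 1
    out: dict[str, dict[str, float]] = {}
    for r in rows:
        if start <= month_index(r["year"], r["month"]) <= end:
            s = out.setdefault(r["source"], {"spend": 0.0, "retained": 0.0,
                                             "closed": 0, "cancelled": 0})
            s["spend"] += r["spend"]
            s["retained"] += r["retained_revenue"]
            s["closed"] += r["closed_jobs"]
            s["cancelled"] += r["cancelled_jobs"]
    return out

def source_report(rows: list[dict], end_year: int, end_month: int) -> dict:
    """Cost per acquired revenue and cancel rate per source, current rolling
    twelve months versus the rolling twelve months ending one year earlier."""
    current = rolling_totals(rows, end_year, end_month)
    prior = rolling_totals(rows, end_year - 1, end_month)
    report = {}
    for source in current:
        cur, pri = current[source], prior.get(source)
        report[source] = {
            "cost_per_acquired_revenue": cur["spend"] / cur["retained"] if cur["retained"] else None,
            "cost_per_acquired_revenue_prior": (pri["spend"] / pri["retained"]
                                                if pri and pri["retained"] else None),
            "cancel_rate": cur["cancelled"] / cur["closed"] if cur["closed"] else None,
            "cancel_rate_prior": (pri["cancelled"] / pri["closed"]
                                  if pri and pri["closed"] else None),
        }
    return report
```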
A single strong month is not a trend. A rolling twelve months of improvement, measured against the prior year equivalent, is a trend. One of those is a reason to make decisions. The other is a reason to keep watching.
Why This Comparison Is Rarely Made
The reason most contractors default to month-over-month is not that same-period-prior-year is conceptually difficult. It is that it requires data from two time periods, connected at the source level, with consistent field definitions across both periods.
Most reporting systems make it straightforward to pull this month's numbers. Pulling last April's numbers at the same source-level granularity, with the same definitions, and running them in parallel requires a more deliberate data architecture than most operations have built. The result is that contractors make month-over-month decisions not because it is the right comparison, but because it is the available one.
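There is no single right way to build that architecture, but the core requirement fits in a few lines: every period, from every source system, passes through one canonical definition of the fields before anything is compared. Everything named below (the raw field names, the CanonicalLead shape) is invented for illustration.

```python
# A sketch of the normalization step that makes the prior-year pull possible:
# both periods flow through the same mapping, so field definitions cannot drift
# between this year's report and last year's. Field names are invented.
from dataclasses import dataclass
from datetime import date

@dataclass
class CanonicalLead:
    received: date
    source: str
    retained_revenue: float

def normalize(raw: dict, source_label: str) -> CanonicalLead:
    """One definition of retained revenue, applied to every period and every source."""
    sold = raw.get("contract_value", 0.0)
    kept = 0.0 if raw.get("cancelled", False) else sold
    return CanonicalLead(received=raw["received"], source=source_label, retained_revenue=kept)

def period(leads: list[CanonicalLead], start: date, end: date) -> list[CanonicalLead]:
    """Slice one period; call twice with windows a year apart to build the comparison."""
    return [l for l in leads if start <= l.received < end]

# The comparison is then two calls to the same slicer, a year apart, for example:
# current = period(leads, date(2025, 4, 1), date(2025, 7, 1))
# prior   = period(leads, date(2024, 4, 1), date(2024, 7, 1))
```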
The cost of that default is not visible in any single month. It accumulates over time in the form of misallocated budget, misattributed performance, and missed signals that a properly structured comparison would have surfaced months earlier.
What Consistent QoQ Tracking Produces
A business that tracks same-period-prior-year metrics consistently over several years develops something most home improvement operations do not have: a clear, low-noise picture of whether the business is actually improving and which specific changes in source mix, funnel management, or operational process drove that improvement.
It can distinguish between a good year and a strong market. It can identify which operational changes produced genuine improvement and which produced temporary lifts that normalized in the following quarter. It can make budget decisions based on multi-year performance trends rather than recent months that may be dominated by seasonal variation.
That clarity does not require different data than most operations already collect. It requires the existing data to be read against the right baseline, consistently, at the source level, over time.
Month-over-month tells you what moved. Same-period-prior-year tells you what actually got better. Those are not the same question, and they do not produce the same decisions.
Revenue Intelligence · Verisyn HQ
See your business performance against the comparison that actually isolates signal from seasonal noise.
Show Me My QoQ Performance →