Sales teams lose deals not because they lack data, but because they don’t know which reports to pull, when to pull them, or what to do with what they find. Most CRM platforms ship with dozens of report templates. Very few sales managers use more than three or four of them consistently, and even fewer have structured their reporting around actual business decisions rather than activity metrics that look reassuring but drive nothing.
This guide covers how to build CRM sales reports that actually work: which report types to set up, which metrics belong in each one, how to structure a forecast dashboard, and where most teams go wrong.

What CRM Sales Reports Are For
Not all CRM reports serve the same purpose. Before building anything, it helps to separate two categories that often get conflated.
A pipeline report shows you the current state of your deals: where they sit, how long they have been there, and what they are worth. It answers questions about right now. A forecast report applies probability weighting to that same data and projects a revenue number for a future period. Treating these as the same thing is one of the most common reporting mistakes sales managers make. If your pipeline coverage looks healthy but your weighted forecast falls short, you have a qualification problem, not a volume problem.
The third category most teams under-use is the rep performance report, which measures the inputs (calls, emails, demos, proposals) alongside the outputs (deals won, average deal size, win rate). This one matters most for coaching. Pipeline reports tell you the state of the business. Rep performance reports tell you why it got there.
Core Report Types to Set Up in Your CRM
Every sales team needs a different mix, but most will build around the same four foundational reports. Each one answers a specific question.
Stage-by-Stage Pipeline Report
This is the baseline view. It shows how many deals sit in each stage of your pipeline and the combined value at each stage. Run it weekly. What you are watching for is imbalance: a bloated proposal stage with no movement, an unusually thin discovery phase suggesting your prospecting effort has dropped off, or deals clustering at the early stages with nothing approaching close.
Keep your pipeline stages between four and seven. More than that creates inconsistent data entry and makes the stage-by-stage report meaningless. Every stage should have a clear exit criterion so reps know exactly when a deal moves forward. Without those definitions, deals drift forward through stages, making pipelines look healthier than they are.
Rep Performance Report
This report breaks down activity and outcomes by individual rep: deals owned, stage distribution, win rate, average deal size, and time in pipeline. Pull it monthly for formal reviews and weekly for coaching conversations.
The useful version of this report compares activities to results rather than results to quota alone. A rep with a 40% win rate who runs ten discovery calls a month has a very different coaching conversation than a rep with a 40% win rate who runs thirty. Both hit the same output metric, but the first rep is more efficient. The second rep may have a qualification problem. Your pipeline data alone won’t surface that distinction.
Forecast Report
A forecast report applies a win probability to each open deal and generates a projected revenue number for a defined period. Most CRM platforms let you set stage-level probabilities or let reps assign probabilities manually. Both approaches work, but they require different kinds of oversight.
Stage-based probabilities are consistent and easy to report on, but they can mask deal-specific risks. A proposal sent to a champion with no budget authority is not the same as a proposal sent to a signed-off buyer, but they sit in the same stage. Rep-assigned probabilities capture that nuance, but they are subject to optimism bias and need manager review before the forecast goes upstream.
The most useful forecast view shows three numbers side by side: total pipeline value, weighted pipeline value, and committed deals (those the rep has explicitly called as closing this period). The gap between weighted value and committed deals tells you how much confidence your team has in their own forecast.
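The three-number view above can be sketched in a few lines. This is a minimal illustration with hypothetical deal records; the field names (`value`, `probability`, `committed`) are assumptions, not any particular CRM's schema.

```python
# Three-number forecast view: total pipeline, weighted pipeline, committed.
# Deal records below are hypothetical examples.
deals = [
    {"value": 50_000, "probability": 0.6, "committed": True},
    {"value": 30_000, "probability": 0.3, "committed": False},
    {"value": 80_000, "probability": 0.8, "committed": True},
]

total_pipeline = sum(d["value"] for d in deals)
weighted_pipeline = sum(d["value"] * d["probability"] for d in deals)
committed = sum(d["value"] for d in deals if d["committed"])

# The gap between weighted_pipeline and committed signals how much
# confidence the team has in its own forecast.
print(total_pipeline, weighted_pipeline, committed)  # 160000 103000.0 130000
```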
Stalled Deals Report
Deals that stop moving through your pipeline are revenue sitting idle. A stalled deals report surfaces opportunities that have not progressed in a defined number of days, which you set based on your average sales cycle. If the typical deal takes 45 days to close, flagging anything with no stage movement in 21 days gives managers enough lead time to intervene.
This report is high leverage because stalled deals rarely die loudly. They just stop moving. Without an automated flag, they get reviewed last in pipeline meetings, if at all.
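The automated flag is simple to express. A sketch under the assumptions above (45-day average cycle, 21-day stall threshold), with hypothetical deal names and dates:

```python
from datetime import date

# Threshold of 21 days, roughly half the assumed 45-day average cycle.
STALL_THRESHOLD_DAYS = 21
today = date(2024, 6, 1)  # fixed date for a reproducible example

# Hypothetical deals with the date of their last stage change.
deals = [
    {"name": "Acme renewal", "last_stage_change": date(2024, 5, 25)},
    {"name": "Globex expansion", "last_stage_change": date(2024, 4, 20)},
]

stalled = [
    d["name"] for d in deals
    if (today - d["last_stage_change"]).days > STALL_THRESHOLD_DAYS
]
print(stalled)  # ['Globex expansion']
```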
For more background on pipeline architecture, this overview of what a sales pipeline is and how to structure one covers the foundational concepts.
Metrics That Belong in Each Report
Choosing the right metrics is where most teams get this wrong. More metrics is not better. Each report should include only the KPIs that inform a decision or prompt an action.
For your pipeline report:
- Deal count and total value per stage
- Pipeline coverage ratio (total pipeline value divided by revenue target for the period; a healthy ratio is 3x to 4x)
- Deal age and time in current stage
- Weighted pipeline value alongside actual value
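The coverage ratio from the list above is a single division. A sketch with hypothetical figures:

```python
# Pipeline coverage ratio: total open pipeline value divided by the
# revenue target for the period. Numbers here are made up for illustration.
total_pipeline_value = 900_000
revenue_target = 250_000

coverage = total_pipeline_value / revenue_target
healthy = 3.0 <= coverage <= 4.0  # the 3x-4x band described above

print(f"{coverage:.1f}x")  # 3.6x
```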
For rep performance:
- Quota attainment percentage
- Win rate (deals won divided by total deals advanced past a defined qualifying stage)
- Average deal size and average sales cycle length
- Activity volume: calls, emails, meetings, proposals sent
For forecast:
- Committed deals: value and count
- Weighted pipeline for the period
- Previous period forecast accuracy (how close your last forecast was to actual results)
For stalled deals:
- Days since last stage change
- Days since last logged activity
- Deal owner and value
One metric that belongs on every report but rarely appears by default is data completeness. If 30% of your deals are missing close dates or have no logged activity in 14 days, the rest of your reporting is unreliable. Bad data produces more than bad reports. It produces reports that managers stop trusting. When that happens, decisions get made on gut feeling instead.
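A data completeness check along the lines described above can be computed directly from deal records. The field names here are hypothetical; adapt them to whatever your CRM exports.

```python
# Share of deals that have a close date and activity within 14 days.
# Records are hypothetical examples.
deals = [
    {"close_date": "2024-07-01", "days_since_activity": 3},
    {"close_date": None, "days_since_activity": 20},
    {"close_date": "2024-08-15", "days_since_activity": 9},
    {"close_date": "2024-06-30", "days_since_activity": 40},
]

complete = [
    d for d in deals
    if d["close_date"] is not None and d["days_since_activity"] <= 14
]
completeness = len(complete) / len(deals)
print(f"{completeness:.0%}")  # 50% -- low enough to distrust the other reports
```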
How to Structure a Weekly Pipeline Review
A structured weekly pipeline review is the highest-leverage habit a sales team can build around CRM reporting. It does not need to be long. Thirty minutes with the right reports pulled in advance covers more ground than two hours of unstructured discussion.
Pull four reports before the meeting: the stage-by-stage pipeline view, the stalled deals list, the rep performance summary for the week, and the forecast update. Work through them in that order.
Start with pipeline coverage. Is the pipeline at 3x the target for the current period? If not, this is the first thing to address, because everything downstream in your forecast depends on having enough active deals. Then move to stalled deals. Review each one briefly: is it actually alive, or should it be moved to closed-lost? Reps tend to keep deals open longer than they should, and every ghost deal in your pipeline adds noise to your forecast.
Finish with the forecast. Ask each rep to confirm their committed deals for the period and flag anything that has changed. The goal is not a perfect forecast number. It is a shared understanding of where revenue is coming from and where the gaps are.
One thing that consistently breaks pipeline reviews is the data not being current. If reps update their deals on Thursday morning because the review is Thursday afternoon, the report reflects that brief window of activity, not the actual week. Build the expectation that the CRM reflects deal reality at all times, not only in the hour before a meeting.
Building a Forecast Dashboard Managers Will Use Daily
A forecast dashboard is different from a report you pull on demand. It is a persistent view that managers check regularly without any manual steps, and it should be designed so that someone can read the key information in 30 seconds.
The most common mistake in dashboard design is including too many widgets. A dashboard with 15 metrics serves no one. Limit each dashboard to five to eight KPIs and group them logically. Place the most critical numbers at the top: pipeline coverage ratio, quota attainment for the current period, and weighted forecast value. Everything else is supporting context.
Structure the layout for scanning, not reading. Use visual cues: color coding for above and below target, trend lines that show direction over the past 30 days, and alert thresholds that trigger when coverage drops below 2x or win rate falls outside a normal range.
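The alert thresholds mentioned above amount to a couple of comparisons. The 2x coverage floor comes from the text; the win-rate band here is a hypothetical stand-in for whatever "normal range" your historical data supports.

```python
def dashboard_alerts(coverage_ratio: float, win_rate: float) -> list:
    """Return alert messages when dashboard metrics cross their thresholds."""
    alerts = []
    if coverage_ratio < 2.0:
        alerts.append("Pipeline coverage below 2x target")
    if not 0.15 <= win_rate <= 0.45:  # hypothetical normal band
        alerts.append("Win rate outside normal range")
    return alerts

print(dashboard_alerts(1.8, 0.50))  # both alerts fire
print(dashboard_alerts(3.5, 0.30))  # []
```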
Role-specific dashboards outperform single shared views in most teams. A rep’s dashboard should show their personal pipeline, their activity for the week, and their quota progress. A manager’s dashboard should show team-level coverage, individual rep performance side by side, and the forecast. An executive view should show quota attainment by team or region and pipeline trends over time. What is actionable for a rep is noise for an executive, and vice versa.
Mria CRM, which runs natively inside Jira on Atlassian Forge, includes a sales dashboard built for teams that manage deals and contacts within Jira projects. More detail on that dashboard is in this overview of the Mria CRM sales dashboard release.
Common CRM Reporting Mistakes to Avoid
Most reporting failures are not technical problems. They are process problems that show up in the data.
Tracking Activity Instead of Buyer Progress
The most seductive CRM mistake is building dashboards full of activity metrics: calls made, emails sent, meetings booked. These numbers are easy to produce and easy to improve by doing more of the same thing. They are also almost entirely within the rep’s control. That controllability makes them feel useful.
The problem is that high activity with low pipeline movement means your reps are busy but not effective. Calls and emails measure what your team is doing. Stage progression and win rates measure whether any of it is working. Both matter, but managers who focus only on activity miss the signal in the outcome data.
Unclear Stage Definitions
If your pipeline stages do not have explicit entry and exit criteria, your pipeline report reflects the varying interpretations of ten different reps, not a consistent view of where your deals actually are. A deal in “Proposal Sent” might mean the formal document is in the client’s inbox, or it might mean someone mentioned a ballpark number on a call. These are not the same stage. When stage definitions are ambiguous, your stalled deals report and your forecast both become unreliable.
Skipping the Forecast Accuracy Audit
Most sales teams review their forecasts weekly but never review their forecasting accuracy historically. A forecast accuracy audit compares what you called at the start of a period to what actually closed. Teams with 20% or greater variance week over week typically have one of three problems: reps are over-qualifying early-stage deals, close dates are being extended repeatedly rather than moved to closed-lost, or stage probabilities do not reflect actual historical win rates.
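The audit itself is simple arithmetic: variance per period is the absolute gap between forecast and actual, as a share of the forecast. A sketch with hypothetical period data:

```python
# Forecast accuracy audit: compare what was called at the start of each
# period with what actually closed. Figures below are hypothetical.
periods = [
    {"forecast": 100_000, "actual": 75_000},
    {"forecast": 120_000, "actual": 115_000},
]

variances = [
    abs(p["actual"] - p["forecast"]) / p["forecast"] for p in periods
]
for v in variances:
    flag = " <- over 20% variance" if v > 0.20 else ""
    print(f"{v:.0%}{flag}")
```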
Letting Data Quality Decay
CRM data decays faster than most managers account for. Job titles change, companies get acquired, contacts leave. A study of B2B sales teams found that roughly 70% of revenue leaders do not fully trust their own CRM data. When reps lose confidence in the data, they stop updating the CRM accurately, which accelerates the decay. The only fix is establishing a regular data hygiene cadence: audit completion rates on critical fields monthly, merge duplicates quarterly, and build required field validation into your pipeline stages so bad data cannot enter in the first place.
For a broader view of how performance management connects to reporting, this piece on sales performance management metrics and process covers the wider framework.
Best Practices for CRM Sales Reporting
A few practices separate teams that actually use their reporting from those that maintain impressive dashboards nobody acts on.
Start With the Decision, Not the Data
Every report should originate from a question that someone needs to answer. What is our forecast for this quarter? Which rep needs coaching support this month? Where are deals stalling in the pipeline? If you cannot name the decision the report informs, the report is probably not worth building. Teams that start with questions build concise, actionable reports. Teams that start with available data build dashboards that nobody opens after the first week.
Automate Recurring Delivery
Reports that require manual effort to produce get skipped during busy periods, which are exactly the periods when they matter most. Set up scheduled delivery for your weekly pipeline summary and your monthly performance recap. Automation eliminates the friction of generation and creates a natural audit trail. When something unusual appears in a weekly report, you can compare it against the previous four weeks without having to regenerate anything.
Make One Person Responsible for Data Quality
Data quality problems are governance problems, not technical ones. Assign a named owner, typically someone in RevOps or sales operations, who is responsible for running monthly data audits, enforcing field requirements, and flagging anomalies before they corrupt forecasts. Without a named owner, data hygiene becomes everyone’s problem. In practice, that means it becomes nobody’s problem.
Validate the Report Before Trusting the Metric
Before trusting a report, check whether the underlying data meets a basic quality threshold. If fewer than 70% of deals have close dates populated, your forecast report is not reporting your forecast. It is reporting only the portion of your pipeline that someone bothered to update. That is a different number, and acting on it as if it were complete will produce the wrong decisions.
A well-built CRM reporting setup does not take months to get right, but it does require working backward from decisions rather than forward from features. Start with the four core reports, define your pipeline stages clearly, build a dashboard your managers will actually check, and establish the hygiene habits to keep the data trustworthy. Everything else follows from that foundation.




