Why we ran this study
Our broader 350-call talk-ratio study measured behavioral averages across the rep side of the call. It did not isolate what the prospect was saying. We wanted a tighter cut: take only the cleanest closed-won and closed-lost outcomes from the same dataset, pair them by vertical and deal size, and ask what specific phrases the buyer used in each group. If certain prospect utterances are near-perfect predictors of outcome, real-time detection of those phrases is one of the highest-leverage things a coaching product can do.
The existing industry literature on buying signals tends to be anecdotal. Sales books list phrases without sample sizes or matched controls. Vendor white papers cite aggregate close-rate lifts without showing the underlying phrase frequencies. This study contributes paired-design data: every closed-won call has a matched closed-lost twin from the same vertical and same ACV band, so the differences cannot be explained by deal type alone.
Methodology
Sample. 94 B2B sales calls (47 matched pairs) drawn from the broader Nimitai 350-call corpus collected April through December 2024. 47 calls had a closed-won outcome within 90 days; 47 had a closed-lost outcome within 90 days. Pairs were matched on three variables: industry vertical (SaaS, professional services, fintech), ACV band ($5K to $25K, $25K to $75K, $75K to $250K), and call stage (discovery, demo, close). Within each matched cell, eligible calls were randomly sampled from each outcome group to form the pairs.
Labeling protocol. Two reviewers independently labeled every call against a 5-category buying-signal rubric (timeline, multi-threading, integration, pricing specificity, urgency) and a 5-category dismissal-signal rubric (thinking-about-it, send-info, status-quo, budget, internal-check). Inter-rater agreement was 0.78 (Cohen's kappa) on buying signals and 0.83 on dismissal signals. Disagreements were resolved by a third reviewer. Time-stamped latency (first signal to first rep acknowledgement) was extracted automatically from the diarized transcripts.
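Cohen's kappa corrects raw percent agreement for the agreement two raters would reach by chance given their label frequencies, which is why it is a stricter number than simple overlap. A minimal sketch of the computation follows; the label sequences are invented for illustration, not the study's actual annotations.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters labeling the same items."""
    n = len(labels_a)
    # Observed agreement: fraction of items both raters labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: what two independent raters with these marginal
    # label frequencies would agree on by accident.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two raters over six calls, hypothetical category labels.
rater_1 = ["timeline", "urgency", "pricing", "timeline", "urgency", "pricing"]
rater_2 = ["timeline", "urgency", "pricing", "timeline", "pricing", "pricing"]
print(round(cohen_kappa(rater_1, rater_2), 2))  # → 0.75
```

Note that the 0.78 and 0.83 values reported above sit in the range conventionally read as substantial agreement, which is why a third reviewer was only needed as a tie-breaker.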
Statistical methods. Per-phrase frequencies report raw counts and percentages of the relevant outcome group. Signal density per stage reports means. The response-latency-to-close-rate analysis uses a logistic regression on the binary outcome with latency bin as a categorical predictor; the 2.1x effect is the odds ratio for the under-15-second bin versus the over-30-second bin.
Limitations. The 94-call sub-sample is small for any single phrase frequency, so individual percentages should be read as directional rather than precise. The matching procedure controls for vertical, ACV, and stage but not for seller seniority or product category within a vertical. The 90-day outcome window may under-count enterprise deals with longer cycles. The full methodology document linked below covers reviewer rubrics in detail.
Top 5 buying signals (closed-won)
The five phrases below appeared between roughly 2.6x and 8.7x more frequently on closed-won calls than on closed-lost calls. Each is reported with its raw frequency in both groups and a brief interpretive note. The phrase strings are paraphrases of the labeled category, not verbatim quotes from every call.
- Signal 01
"What does implementation look like?"
Appeared on 38 of 47 closed-won calls (81%) and only 9 of 47 closed-lost calls (19%). Almost always preceded the budget conversation by 4 to 8 minutes. The prospect is mentally fast-forwarding past the buying decision and asking how the product gets into their workflow.
- Signal 02
"Can I bring [name] in on the next call?"
Appeared on 31 of 47 closed-won calls (66%) and 4 of 47 closed-lost calls (9%). Multi-threading initiation is one of the highest-signal buying cues in B2B. The prospect is recruiting the rest of the buying committee. The reverse pattern (closed-lost) is a single contact insisting they alone can decide.
- Signal 03
"How would this work with our [existing tool]?"
Appeared on 29 of 47 closed-won calls (62%) and 11 of 47 closed-lost calls (23%). Integration questions reflect a prospect already imagining the post-purchase state. Reps who answered with a specific named integration (rather than a vague "we plug into most tools") moved the conversation forward in 27 of those 29 cases.
- Signal 04
"When could we get started?"
Appeared on 26 of 47 closed-won calls (55%) and 3 of 47 closed-lost calls (6%). Direct timing pull is a near-perfect commercial signal. The risk is that the rep treats it as a chance to pitch features rather than as the buying signal it is. Reps who responded by offering a specific start date closed 24 of 26.
- Signal 05
"What does pricing look like for [specific scenario]?"
Appeared on 24 of 47 closed-won calls (51%) and 8 of 47 closed-lost calls (17%). Note the construction: not "what is your pricing" but "what does pricing look like for my specific situation." The prospect is co-building the proposal in their head. Reps who walked through a concrete number on the call closed 21 of 24.
Top 5 dismissal signals (closed-lost)
The mirror image. These five phrases appeared markedly more frequently on closed-lost calls, from 1.5x for the noisiest signal to 7x and beyond for the cleanest. The pattern across all five is that they sound polite but functionally end the synchronous conversation. The closed-won exceptions are instructive: in nearly every case the rep refused to accept the polite version and probed for the specific underlying objection.
- Dismissal 01
"I'll need to think about it."
Appeared on 32 of 47 closed-lost calls (68%) and only 5 of 47 closed-won calls (11%). The classic polite-disengagement phrase. The closed-won cases were ones where the rep immediately probed ("Help me understand what specifically you want to think through?") and the prospect re-engaged. The closed-lost cases were ones where the rep accepted the phrase and moved to scheduling a follow-up that never happened.
- Dismissal 02
"Send me some information."
Appeared on 28 of 47 closed-lost calls (60%) and 4 of 47 closed-won calls (9%). A request for asynchronous material is almost always a request to end the synchronous conversation. The 4 closed-won exceptions were cases where the rep asked the prospect to share the material with a named decision-maker, which converted a dismissal into a multi-threading event.
- Dismissal 03
"We're happy with what we have."
Appeared on 21 of 47 closed-lost calls (45%) and 0 of 47 closed-won calls. This is the only signal in the study that hit zero on the closed-won side. Once spoken, the call rarely recovers in the same session. The leverage is upstream: discovery questions earlier in the call should surface this view before the rep starts pitching.
- Dismissal 04
"Budget is tight this quarter."
Appeared on 19 of 47 closed-lost calls (40%) and 6 of 47 closed-won calls (13%). Budget objections are recoverable when the rep explores the budget cycle ("When does next quarter's budget get approved?") rather than countering on price. The 6 closed-won cases all involved the rep mapping the deal to a future budget window rather than pushing for current-quarter close.
- Dismissal 05
"Let me check with the team."
Appeared on 18 of 47 closed-lost calls (38%) and 12 of 47 closed-won calls (26%). The least clean dismissal signal, because it can be either real internal alignment or polite deflection. The differentiator was specificity: closed-won reps asked "Who specifically do you need to check with, and what would help them say yes?" and got a name. Closed-lost reps accepted the vague version.
Signal density by call stage
Buying signals do not arrive at a constant rate across a deal cycle. They cluster toward the close, and the rate of clustering is itself a leading indicator of outcome. The table below reports mean buying-signal counts per call at each stage, split by eventual outcome.
| Call stage | Closed-won | Closed-lost | Note |
|---|---|---|---|
| Discovery | 0.6 signals per call (avg) | 0.3 signals per call (avg) | Buying signals appear earliest in winning calls because the discovery questions surface them. Losing calls go signal-light through discovery because the rep is monologuing and the prospect has no opening to volunteer intent. |
| Demo | 2.4 signals per call (avg) | 0.7 signals per call (avg) | The demo stage is where the gap widens the most. Closed-won prospects use the demo as a buying simulation, asking implementation and integration questions. Closed-lost prospects stay passive or ask cosmetic feature questions that do not commit them. |
| Close | 3.1 signals per call (avg) | 0.4 signals per call (avg) | By the close stage the gap is 7.75x. Closed-lost calls reaching this stage are typically dragged there by the rep rather than pulled there by the prospect. The signal density at close is the single best leading indicator of which deals will land in the 90-day window. |
The practical takeaway: managers running pipeline reviews should ask reps to point to specific buying-signal moments in the last call. If a rep cannot name one in a deal that is supposedly progressing, the deal is probably not progressing. Signal density is a better forecast input than rep optimism.
The 15-second window finding
The single most actionable finding from the study is not which signals matter, but how fast the rep responds when one surfaces. On closed-won calls, the average gap between the first verbal buying signal and the rep's acknowledgement of it was 8 seconds. On closed-lost calls, the average gap was 47 seconds. In many closed-lost cases the rep never acknowledged the signal at all and instead continued reading from a slide.
We binned the response latency into three buckets (under 15 seconds, 15 to 30 seconds, over 30 seconds) and ran a logistic regression on the closed-within-90-days outcome with latency bin as the predictor. Reps in the under-15-second bin closed at 2.1x the odds of reps in the over-30-second bin; the 15-to-30-second middle bin sat in between at roughly 1.4x. The effect held within every vertical we tested and within every ACV band.
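With a single categorical predictor and no covariates, the odds ratios from a logistic regression like this reduce to the cross-tab odds ratios, which can be computed directly. The sketch below shows the binning rule and that computation; the per-bin won/lost counts are invented for the example (they are chosen only to echo the headline 2.1 ratio) and are not the study's actual tallies.

```python
def latency_bin(seconds):
    """Assign a rep's signal-response latency to one of the three study bins."""
    if seconds < 15:
        return "under_15s"
    if seconds <= 30:
        return "15_to_30s"
    return "over_30s"

def odds_ratio(won_a, lost_a, won_b, lost_b):
    """Odds of closing in bin A divided by odds of closing in bin B."""
    return (won_a / lost_a) / (won_b / lost_b)

# The two average latencies reported above land in the extreme bins.
print(latency_bin(8), latency_bin(47))  # → under_15s over_30s

# Invented counts for illustration only; not the study's cell counts.
print(odds_ratio(won_a=21, lost_a=10, won_b=10, lost_b=10))  # → 2.1
```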
This is the finding that makes real-time coaching commercially meaningful. Post-call review can teach a rep to recognize signals on the next call. Only real-time alerting can shorten the response latency on the current call. A post-call dashboard that flags a missed signal 24 hours later does not move this number. An in-call nudge fired within 200 ms of the signal phrase being detected does.
Implications for sales leaders
Three operational changes follow directly from the data. First, build the five top buying signals into the call-review rubric. When a manager listens to a call, the rubric question is not "did the rep ask good questions" but "how many of the top five buying signals appeared, and how quickly did the rep acknowledge each one." This is a more concrete coaching prompt than abstract communication feedback.
Second, train reps on the inverse of the dismissal phrases. Reps cannot prevent prospects from using "I'll need to think about it." They can, however, decide in advance how they will respond. The most effective response we observed was a specific, non-leading probe: "Help me understand what specifically you want to think through." On the 5 closed-won calls where this phrase appeared, the rep used some variant of that probe in 4 of them.
Third, for teams that can adopt real-time coaching, the 15-second window is the metric to operate against. The current average is 47 seconds on losing calls. Even a partial improvement to 30 seconds shifts a meaningful share of deals into the under-30-second bucket. See our companion analysis on buyer intent signals in sales calls and on why reps miss buying signals for the practitioner-facing version.
How Nimitai uses this data in product
The five top buying signals and five top dismissal signals are encoded directly into Nimitai's real-time alert layer. When a phrase matching one of the categories surfaces in a live call, the rep sees an in-call card within 200 ms identifying the signal type and suggesting a response framing. The response-latency metric is logged per call and surfaced in the weekly rep report, alongside talk ratio and open-ended question count. Sales leaders see a team-wide latency distribution in the manager dashboard.
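As a toy sketch of what category-level phrase detection can look like, consider the snippet below. The keyword patterns are invented stand-ins for illustration only; Nimitai's actual detector is not described here and would presumably rely on trained models rather than literal regexes.

```python
import re

# Hypothetical keyword patterns per signal category (illustrative only).
SIGNAL_PATTERNS = {
    "timeline":    re.compile(r"\b(get(ting)? started|go live|kick off)\b", re.I),
    "integration": re.compile(r"\b(work with our|integrate|plug into)\b", re.I),
    "send_info":   re.compile(r"\bsend (me|us) (some )?(info\w*|materials)\b", re.I),
}

def detect_signals(utterance: str) -> list[str]:
    """Return every signal category whose pattern matches the utterance."""
    return [name for name, pattern in SIGNAL_PATTERNS.items()
            if pattern.search(utterance)]

print(detect_signals("When could we get started?"))      # → ['timeline']
print(detect_signals("Just send me some information."))  # → ['send_info']
```

The design point is that detection is category-level, not string-level: the alert fires on the labeled signal type (the same categories used in the study rubric), so paraphrases of the same intent map to the same in-call card.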
The buying-signal alert system is live in the Nimitai real-time AI meeting copilot. The rubric is surfaced as automated coaching insight in the AI sales coaching dashboard. Pricing starts at $149 per seat per month.
Citations and methodology download
This study should be read alongside the related Nimitai talk-ratio research and the published industry literature on buyer behavior in B2B sales. The buying-signal categories we used were informed by but adapted from the MEDDPICC framework's identify-pain and decision-process criteria.
- Nimitai Talk-Ratio Research Study (350 B2B calls) — the parent dataset this sub-study was drawn from.
- Gong: State of Revenue Intelligence (resource library) — industry comparison data on buyer behavior signals.
- Wikipedia: Conversation Intelligence — background on the analytical category.
- Related practitioner pieces: Buyer Intent Signals in Sales Calls and Why Sales Reps Miss Buying Signals.
The full methodology document, including reviewer rubrics, per-vertical phrase tables, and the logistic-regression output for the latency analysis, is available as a downloadable PDF: Download the methodology PDF.