August 2, 2023 4:02 pm

David Wadler

Pete (my business partner) and I are statistics nerds. As a child, I used to come home from school and dive into the sports section of the newspaper. I would read all the baseball box scores; they told me a story. As an adult, Pete built a machine-learning model based on umpires' called strike/ball data. And not long after we met, the two of us wrote some code to search for arbitrage opportunities using published data from Lending Club. (We were too late, it turns out.)

When I came across Loopio’s blog post, entitled 51 RFP Statistics on Win Rates & Proposal Management, I was excited. (Confused? Go back to the first sentence of this email and think about the subject matter that is covered on this website. Statistics and RFPs? Yeah, that lights my fire.) However, my enthusiasm dissipated quickly as I read through the piece. There were numerous claims that simply didn’t pass the sniff test.

Many of us are familiar with Benjamin Disraeli’s famous quote: “There are three kinds of lies: lies, damned lies, and statistics.” Less well known is Henry Clay’s comment: “Statistics are no substitute for judgment.” That’s my primary concern with the Loopio piece. Working through the data and the conclusions drawn from it should have led them to question the results, or at least to moderate their claims. In the paragraphs that follow, I’ll pull out a few of the findings from the Loopio piece and evaluate their plausibility. Without further ado, let’s get into it….

Claim: "The average RFP win rate is 44%.‍"

Why it fails the sniff test: This statistic suggests that the odds of winning an RFP are only slightly worse than flipping a coin. Surely if that were the case, there would be far less groaning from sales professionals who find RFPs in their inboxes! Here’s the problem with the win rate: it is far too high. It would imply that a large percentage of RFPs are issued to just one or two suppliers, which — in most cases — would defeat the purpose of an RFP.

While it’s true that we build software that, among other things, helps businesses quickly craft top-flight RFP responses, we have also spent years delivering software that helps corporate buyers issue RFPs. Our data indicates that the average number of suppliers invited to an RFP is 9.8. The median number is five. And the mean count of suppliers who agree to participate: 4.77. I feel good about our data in large part because it is corroborated elsewhere. The Procurement Cube shares that the typical RFP is issued to five to ten suppliers.

Let's switch back to thinking about win rates. If contracts were awarded randomly, this would suggest an average win rate of between 10% and 20%. What’s really interesting is The Procurement Cube’s reported average win rate. Are you ready? 4.2%. They write at length about the discrepancy between an expected win rate of ~15% and an actual win rate of 4.2%. But the big takeaway, for the purpose of the blog post that you’re currently reading, is that Loopio’s number is more than ten times higher than The Procurement Cube’s.
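For fellow nerds who want to see the arithmetic, here's a minimal Python sketch of that random-award baseline. (Treating an award as a 1-in-n coin flip is a simplifying assumption for illustration; real awards obviously aren't random.)

```python
# Back-of-the-envelope check: if awards were random, a supplier's expected
# win rate would be roughly 1 / (number of competing suppliers).
supplier_counts = {
    "mean invited": 9.8,          # our data
    "median invited": 5,          # our data
    "mean participating": 4.77,   # our data
}

for label, n in supplier_counts.items():
    print(f"Implied random-award win rate ({label}): {1 / n:.1%}")

# Implied random-award win rate (mean invited): 10.2%
# Implied random-award win rate (median invited): 20.0%
# Implied random-award win rate (mean participating): 21.0%
```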

Claim: “The average RFP advancement rate is 55%.”

Why it fails the sniff test: On its face, this number does not seem unreasonable. The RFP is a critical step in the sourcing process, but it is just one step of many. And if you consider that the goal of many of these steps is to apply a filter, then removing roughly half of the vendors after the RFP can really help a buyer narrow its focus. “Okay, David, so you’re saying that this data seems okay. Why are you including it in this blog post?”

It's important to consider other data that can provide the context within which to evaluate a claim. In this case, we’re going to jump back to the previous claim about win rate. What are the implications of these data when considered together?

  • A supplier gets invited to 100 RFPs.
  • A supplier advances in 55 of those RFPs.
  • A supplier wins 44 of the RFPs.

This means that the win percentage after advancing is 80%! Now try to wrap your head around what this means on a per-RFP basis.

  • A buyer invites 7 vendors to an RFP.
  • 4 of those 7 vendors advance (57%).
  • 3 of those vendors win the RFP (43% of the invitees / 75% of the advancees).

It goes without saying that most RFPs don’t result in awards for multiple vendors, which makes it hard to accept the validity of the data. If the reported data is correct, then the reams of data around RFP issuance are incorrect, and that doesn't seem likely.
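Here's the same arithmetic as a quick Python sketch, taking Loopio's two claims at face value and applying them to a hypothetical seven-vendor RFP. (The vendor count is mine, chosen as a plausible mid-range figure.)

```python
# Applying Loopio's two claims to a single, hypothetical seven-vendor RFP.
invited = 7
advancement_rate = 0.55   # Loopio: average advancement rate
win_rate = 0.44           # Loopio: average win rate

advancing = round(invited * advancement_rate)   # 4 vendors
winning = round(invited * win_rate)             # 3 vendors

print(f"Advance: {advancing}/{invited} ({advancing / invited:.0%} of invitees)")
print(f"Win:     {winning}/{invited} ({winning / invited:.0%} of invitees, "
      f"{winning / advancing:.0%} of advancees)")
print(f"Implied win rate after advancing: {win_rate / advancement_rate:.0%}")

# Advance: 4/7 (57% of invitees)
# Win:     3/7 (43% of invitees, 75% of advancees)
# Implied win rate after advancing: 80%
```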

Claim: "The top reason for losing a bid? ‘Price rises.'"

Why it fails the sniff test: The top reason is that suppliers typically never learn why they didn’t win an RFP. Indeed, it is often fodder for speculation. “We met all their criteria. It was a perfect match. How could we lose?” Putting aside the source of the data for now, let’s think about what losing on price means in practice. First, we need to revisit the idea that sourcing is a multi-step process. Now, imagine that you’re in the buyer’s shoes…. You’ve received numerous RFP responses and several vendors have offerings that align very well with your requirements. Do you simply award a contract to the one with the lowest price?

You could, but procurement departments are, by their nature, favorably inclined to negotiate. And that is what usually happens. Unquestionably, price is a significant factor in a vendor’s ability to win business, but RFPs are designed for multi-factorial, complex purchase decisions, and reducing a win/loss outcome to cost doesn’t seem to hold up.

The Underlying Issue: Poor Data

There are a number of reasons that a data set can be misleading. Kudos to Loopio for sharing information about their sources. They surveyed “1,500 people around the world involved in responding to RFPs,” which seems like a reasonable number. So where did things go wrong? I don’t know for sure, but I have some theories.

Selection bias

This occurs when the data is not representative of the population that it is supposed to represent. For example, if you are trying to determine the average height of American adults, but you only measure the height of people who are over 6 feet tall, then your data will be biased.
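To make that concrete, here's a toy simulation. The height distribution and its parameters are invented for illustration; the point is the gap between the two means.

```python
import random

random.seed(42)

# Toy population of adult heights in inches; the normal distribution and
# its parameters are illustrative assumptions, not census figures.
population = [random.gauss(67, 4) for _ in range(100_000)]
true_mean = sum(population) / len(population)

# Selection bias: only measure people taller than 6 feet (72 inches).
sample = [h for h in population if h > 72]
biased_mean = sum(sample) / len(sample)

print(f"True mean height:    {true_mean:.1f} in")    # ~67 in
print(f"Sampled mean height: {biased_mean:.1f} in")  # ~74 in -- way off
```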

How it applies: Well over half of the people polled are reported to be RFP/Proposal Managers or RFP/Proposal Writers. Companies that employ full-time proposal writers and managers tend to have two things in common:

  • They have enough scale to have employees dedicated to the RFP response process. This is not the case for most small businesses, which, due simply to the size of their market segment, probably reply to more RFPs in aggregate than large enterprises do.
  • They generate enough business via RFP to justify employing full-time RFP response resources.

Only a minority of all businesses have personnel whose sole job is to reply to RFPs, yet businesses that do make up a majority of the data set.

Measurement error

This occurs when there is an error in the way that the data is collected. For example, if you are trying to measure the temperature of a room, but your thermometer is not calibrated correctly, then your data will be inaccurate.
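Here's the thermometer example as a toy simulation; the three-degree offset and the noise level are invented numbers.

```python
import random

random.seed(0)

true_temp = 70.0  # actual room temperature in °F (illustrative)

# A miscalibrated thermometer: a systematic +3 °F offset plus random noise.
readings = [true_temp + 3.0 + random.gauss(0, 0.5) for _ in range(50)]
measured_mean = sum(readings) / len(readings)

print(f"True temperature: {true_temp:.1f} °F")
print(f"Measured mean:    {measured_mean:.1f} °F")  # consistently ~3 °F high
```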

How it applies: The data was collected by surveying individuals. Eyewitness testimony is increasingly subject to scientific scrutiny and is broadly characterized as unreliable. Indeed, human psychology gets the best of all of us. Recency bias, confirmation bias, and other biases may well be at work in the data, particularly for reporting around win rate. If all the surveyed users had confirmed that completed RFPs and their outcomes were logged in their systems, I would feel more comfortable with the accuracy.

Response bias

This occurs when people answer questions in a way that they think is socially desirable, rather than in a way that is truthful. For example, if you ask people how much money they make, some people may lie and say that they make more money than they actually do. This can skew the results of the survey.
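And one last toy simulation. The income distribution, and the share of respondents who inflate their answers, are invented for illustration.

```python
import random

random.seed(1)

# Illustrative incomes with a lognormal skew (an assumption, but a
# common one for income data).
true_incomes = [random.lognormvariate(11, 0.5) for _ in range(10_000)]

# Response bias: suppose 30% of respondents inflate their income by 25%.
def reported(income: float) -> float:
    return income * 1.25 if random.random() < 0.30 else income

survey = [reported(x) for x in true_incomes]

true_mean = sum(true_incomes) / len(true_incomes)
survey_mean = sum(survey) / len(survey)
print(f"True mean income:     ${true_mean:,.0f}")
print(f"Surveyed mean income: ${survey_mean:,.0f}")  # skewed upward
```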

How it applies: Again, human psychology could be at work here. Specifically, there is a particular incentive for RFP proposal managers and writers to validate the importance of their jobs. High win rates reflect well on them and could be persuasive to organizations that are considering adding or removing personnel dedicated to RFP responses.

Conclusion

In this blog post, we have seen how statistics can be misleading, confusing, or inaccurate if we don’t pay attention to the context, the source, and the methods behind them. We have also walked through some of the common ways a survey can go astray: selection bias, measurement error, and response bias. Statistics can be powerful tools to inform our decisions and opinions, but only if we use them wisely and responsibly. The next time you read a headline or a report that claims something based on statistics, don’t take it at face value. Dig deeper, ask questions, and think for yourself. You might be surprised by what you uncover.

About the Author

David Wadler is a co-founder and Chief Revenue Officer at Vendorful. Prior to Vendorful, he was the General Manager for Rich Media & Cloud at Lexmark Enterprise Software, where he was responsible for the strategic direction of Lexmark’s initiatives as they related to rich media and cloud products. He came to Lexmark in 2013 with the acquisition of Twistage, where he was a co-founder and CEO. Prior to Twistage, he worked in a variety of industries and roles while trying to figure out what he was supposed to do with himself. David holds a degree in economics from Brown University and lives in New York City.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}