Retailers who select new systems make several common mistakes with their RFPs. But one stands out, and I see it repeated everywhere I go. Here is how to avoid it, and how to make sure your next expensive investment pays off.
No test drive.
Imagine you were purchasing a new car. What steps would you take? Set your budget? Create a list of “must haves”? Visit dealerships? Probably. But your visceral experience with the cars would have the biggest impact on your decision, because a hands-on test drive helps you truly understand what driving the car will be like. You will not get that from a Consumer Reports article. Or a question posted to Reddit. Or reading the owner’s manual. Most people would laugh at anyone who forks over the purchase price of a new car without a test drive.
But retailers spend far more than the price of a car – and take on much more risk – without a system test drive.
Time and again, retailers send out an RFI – a request for information. (Isn’t that just like asking car makers to send over some marketing collateral?) Then they select the most promising systems and send an RFP – a request for proposal – to those candidates. Too often the retailer uses the “owner’s manual” for their current system to write the RFP for the new one. (Does your software require an oil change every 3,000 miles? Or 5,000? How often do I need to rotate the tires? Does it have a backup camera?) The solution providers essentially send back an easy-to-read owner’s manual of their own. You would never buy a car based on that.
Of course not. That is why retailers usually do some research with current user references, which is like reading owner forums to see what typically goes wrong with the vehicle. Maybe they also tap into Gartner or Forrester to understand the system’s reputation, sort of like reading Consumer Reports or checking J.D. Power ratings. The top few candidates get to demo the system. Again, that is a lot like watching the car model you are considering driven around a race track by a professional driver. Not exactly helpful when “your mileage may vary.” That is usually as far as the selection review goes. And it is at that point that retailers make their multi-million-dollar decision.
To be fair, my role in all this (as a consultant and advisor) is much like a mechanic’s: take a look at the retailer’s situation and the solutions on offer, and use my experience to help steer them toward the best option.
Why don’t retailers do more test drives? Call them proofs of concept or conference room pilots. Either way, retailers should get hands-on experience using their own data before making a purchase commitment.
It’s funny when retailers tell me it is too expensive to conduct proofs of concept. Or that they do not have the time, the personnel, or the need. Because tearing out a failed implementation will be far more expensive.
In an industry that needs to innovate constantly, test drives allow retailers to experiment, learn, and adapt much more quickly than a full-blown implementation does. For example, all retailers are trying to master AI and ML to make better decisions about assortment, marketing, and pricing. But they don’t actually know what they want. How could they? They have never used these tools in that context or at that scale before. They may have a few use cases built, but they cannot know how the solutions they select will meet those use cases until they get a chance to test drive them for themselves.
As a best practice within a selection process, retailers should set up “bake-offs” among their top candidates. (Assuming TCO – total cost of ownership – is within a reasonable range across the choices.) Let super users “kick the tires” by feeding a test site with data from a few stores or categories. Even if it takes a few weeks to set up, the insight and accelerated adoption will be worth it. Challenge solution providers to create a test that can be evaluated in 6-8 weeks for a nominal fee. (No, this isn’t going to be free. But it will dramatically reduce the risk of a bad purchase decision.)
Retailers tend to do the same thing: delay decisions on system upgrades and implementations until the current system is at a near-critical stage, then rush into a decision without ever test driving the solution with their own data. I wish they could see how much better their decisions would be with one simple test drive.