I Like Market Data

Even before they contacted me to ask if I was interested in writing about it, I’d seen that Krush had released the Executive Summary for the Krush Buyers Report and it seemed an intriguing idea. Most of my five loyal readers know I think three things about market research and data.

First, that we don’t get enough of it, though that seems to be changing. Second, that when we do get good data, we, as an industry, sometimes don’t know how to make use of it, so we don’t really understand its value. That’s a particular problem if you’re a company trying to sell the data you worked hard to collect. Third, that you have to drill down on the methodology to understand the strengths and weaknesses of the collection and analysis process. That’s true for any process, not just Krush’s.
What Krush did was invite “…a select group of brands exhibiting at AGENDA to preview their upcoming Spring 2012 lines to their consumer fan base using KRUSH technology.” Then, “Consumers – thousands who represent brands’ trend setters as well as their “mall-shopping” followers – rated each item from new collections in an online SneakPeek™ event on KRUSH.” They generated 135,217 ratings of 1,059 different products that included pretty much everything but hard goods.
The idea is that Krush can take this data and generate a report for each brand that indicates what is likely to sell (or not sell) well when the product hits the retailer, thereby allowing the brand to positively influence retailer buys and sell-through, and perhaps adjust what they produce. The question I asked immediately, the one you are no doubt asking and that, to be fair, Krush acknowledges in its summary as key, is: “How do we know these results will really be predictive of what people will buy?”
Krush has done this before with scores of brands, and tells me the data is “predictive” of what’s actually sold. But of course, as a brand, you won’t “know” how it worked until the product hits the retailer and you compare what sells with what the reports said would sell. Let’s take a closer look at their methodology as they describe it and see if there are some obvious questions we might ask.
Here’s how Krush starts its description of its methodology:
“KRUSH uses proprietary Crowdsource™ technology that leverages proven predictive models, game theory and online social networking technologies to capture, systematize and analyze large amounts of data designed to assist brands in improving demand forecasting and sales of goods in the Action Sports and Lifestyle industry.”
Well, that sounds impressive and seems kind of intuitively reasonable. Trouble is, I have no idea what it means. They go on to say their “Web-based platform allows manufacturers to preview items from upcoming lines to consumers and fans of the brand up to one year before market introduction in an online SneakPeek™ event.”
Now we’re getting somewhere. Consumers see the product online and tell you what they think about it. I understand that, though the process is much more complicated than I’ve glibly described it. Here’s what Krush says:
“…manufacturers submit their line-ups to KRUSH which are entered into our system and tagged for features such as color way, material, style, trim, accents etc. KRUSH then leverages social networking communities to identify consumers and fans of the brand, then invites selected individuals to preview and rate the latest items. Using an intuitive rating system, consumers indicate whether they like or dislike each item in the line. Within days there are enough ratings to produce accurate predictive data…”
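Just to make the mechanics concrete, here’s a toy sketch, in Python, of what “tagged for features” plus a simple like/dislike tally could look like. To be clear, this is my own illustration: the item names, feature fields, and scoring are assumptions, not Krush’s actual system, which is proprietary and certainly far more involved.

```python
# Hypothetical illustration only -- not Krush's data model or code.
# Items are tagged with features; ratings are simple like/dislike votes.

items = [
    {"id": "hat-01", "brand": "BrandX",
     "features": {"colorway": "black/red", "material": "wool", "style": "beanie"}},
    {"id": "tee-07", "brand": "BrandX",
     "features": {"colorway": "heather", "material": "cotton", "style": "graphic tee"}},
]

# Each rating is just (consumer_id, item_id, liked?)
ratings = [("c1", "hat-01", True), ("c2", "hat-01", False), ("c1", "tee-07", True)]

def like_share(item_id, ratings):
    """Fraction of raters who liked the item -- the simplest possible 'score'."""
    votes = [liked for _, rid, liked in ratings if rid == item_id]
    return sum(votes) / len(votes) if votes else None

for item in items:
    print(item["id"], like_share(item["id"], ratings))
```

Even in this stripped-down form, everything downstream obviously depends on who the raters are and exactly what they were shown.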
Now, I think, some of the questions you might want to ask about the process become obvious.
1. The participants are “self-selected” as fans of the brand. They are, for lack of a better word, “core” customers. Does that selection process bias them towards the brand? Maybe that’s a good thing.
2. What exactly do they see online? One piece? All the brand’s pieces? A brand’s piece compared with comparable pieces from other brands? Line drawings or actual photos? In one color? Which color? On a model or just the item by itself?
3. Is the comparison valid across categories? That is, should we be asking just whether the consumer likes this hat better than another hat, or is the right question whether they’d choose the hat over the t-shirt, the pair of pants, or another brand’s hat?
4. Will what a consumer is disposed to buy based on what they see in the survey be what they are disposed to buy in an actual store or on an online site?
5. Are suggested retail prices featured in the online survey, and is price taken into account in estimating the probability of purchase?
6. Why do we believe that months later a customer’s preferences will remain the same? I suppose that’s a problem we have to deal with no matter what.
7. 80% of participants are male and 63% are between the ages of 15 and 20. What do you do with this data if that isn’t your market?
Krush goes on to say, “Each item within a SneakPeek and the SneakPeek as a whole are then scored using a combination of proprietary popularity and ranking algorithms that incorporate preferences, passions, purchasing intent, demographics, influence, and other factors. The items are then normalized and ranked, with the highest scores indicating the most popular items and the lowest scores reflecting the least popular items in the line.”
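To make that passage a little more concrete, here’s a minimal sketch of what a “score, normalize, and rank” step could look like. The factor names, weights, and min-max normalization below are my own assumptions for illustration; Krush’s actual algorithms are proprietary and unpublished.

```python
# Hypothetical sketch of a "score, normalize, rank" step -- my illustration,
# not Krush's proprietary algorithm. Factor names and weights are made up.

# Assume each item already has per-factor scores on some arbitrary scale.
raw_scores = {
    "hat-01":  {"preference": 0.62, "purchase_intent": 0.40, "influence": 0.55},
    "tee-07":  {"preference": 0.81, "purchase_intent": 0.70, "influence": 0.30},
    "pant-03": {"preference": 0.45, "purchase_intent": 0.25, "influence": 0.20},
}
weights = {"preference": 0.5, "purchase_intent": 0.3, "influence": 0.2}

# Weighted combination of the factors for each item.
combined = {
    item: sum(weights[f] * v for f, v in factors.items())
    for item, factors in raw_scores.items()
}

# Min-max normalize so the best item in the line scores 1.0 and the worst 0.0,
# then rank from most to least popular.
lo, hi = min(combined.values()), max(combined.values())
normalized = {item: (s - lo) / (hi - lo) if hi > lo else 0.0
              for item, s in combined.items()}
ranking = sorted(normalized.items(), key=lambda kv: kv[1], reverse=True)

for item, score in ranking:
    print(f"{item}: {score:.2f}")
```

Even a toy like this makes the key point obvious: the weights you choose drive the ranking, which is exactly why you’d want to know how the different factors are emphasized.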
You can see from that quote they are trying to deal with some of the questions I raised. In fact, if you go to this link and then hit “continue browsing,” you can see the brands they are, have been, and will be collecting information on. You can even rate the open brands’ products.
When you do this, you’ll be asked if you “like” or “dislike” the product. The price is given and you can “Buy It Now” or reserve it if it’s not yet available. There’s also a box for a comment. Go do it yourself. It’s better than my description.
Spending a couple of minutes doing this answers a few of the questions I asked (and suggests that maybe I should have spent time on the web site before asking them) and raises some new ones. For example, it seems that I can give my opinion on any product I want. Trouble is, cool as I am, I am not the target demographic. My opinion messes things up. But then, when I try to say I “like” something, it makes me log in to my Facebook account. Krush wants access to my information on Facebook and wants permission to email me. I wasn’t prepared to do that. Maybe that helps to self-select against people they really don’t want in the sample.
Where the item is available now, the price is shown and you are offered the chance to buy it. You see a standalone picture of the item. Sometimes it’s on a model. As you can see, each brand apparently has considerable flexibility in how they show their products. And of course, you’re being asked to like or dislike a single product in isolation from other brands, colors, and merchandising presentation. Well, nobody is suggesting that isolating and figuring out consumer motivations is easy.
You, as a brand, have some beliefs, probably both anecdotal and empirical, about who your customers are and why they buy your product. Maybe Krush’s process will allow you to test those. In any event, you’d certainly want to know something about how the different factors are ranked, which is given more or less emphasis, and why. As Krush notes, the participants identify themselves as “core” (not Krush’s word) consumers. Is that your market?
While we’re on the subject of trying to ferret out consumer motivations, you might go read a book called Blink, by Malcolm Gladwell. He’s the guy who brought us The Tipping Point. Here’s the ubiquitous Amazon link to Blink.
I think what Krush is doing is a good idea. Of course, if it works well, you won’t have any choice but to use it because everybody else will be. It could become sort of like, “Show me the CarFax!” That would have to be Krush’s wildest dream.
My guess is that it will be like all the market research I’ve ever seen. Not a panacea, but useful if you’re thoughtful about how it’s done and how you use it. To use Krush’s word again, it’s “predictive.” It’s not going to tell a brand precisely what and how much to make, or a retailer what and how much to buy. The point of this article is that with Krush’s market research, or anybody else’s, you’ve got to dive deep into the details before deciding to participate and, if you do participate, to get real value out of it.
5 replies
  1. Cary says:

    Jeff,
    I think this research method has some good potential. It takes the “focus group” concept to a level that could actually provide some real value. One of the biggest problems with focus groups is that they are so expensive it is difficult to get enough opinions to get any real value out of them. This is making a focus group cheap (I don’t actually know the cost, but I assume it is relatively cheap compared to a focus group on a per-respondent basis).

    However, one of the biggest challenges with asking consumers about what they plan to purchase is that they can be extremely fickle. It has been well documented how bad consumers are at predicting what they will purchase in the future — especially with a fashion product as opposed to a more utilitarian product. Fortunately, this model should be fairly easy and inexpensive to test. If a brand finds that it is quite predictive for them, then they can continue to use it.

    • jeff says:

      Hi Cary,
      Everybody seems to think the same thing about Krush: potentially a good idea, success will vary by brand, and it’s not a panacea. Consumers, as you say, are unpredictable. Go read Blink if you want to read some interesting stories about how we actually make decisions compared to how we think we make them.

      Thanks,
      J.

  2. Joe Burlo says:

    Jeff, Krush have gone to great lengths to make their analysis as predictive as possible, in the hope that it will pre-empt what a shopper will decide to purchase at a future date, which will hopefully be the time the Brand’s line is actually out in the stores. What it fails to do is predict what the participating consumer will do when he is actually in the store and faced with multiple choices from many different Brands that he likes. Even though he might ‘like’ or ‘buy it now’, in reality, unless he has unlimited funds and can buy everything he likes, he will only buy one product, and that might not be the same brand he ‘liked’. So just because a certain percentage ‘liked’ a product does not mean that they will all go out and buy it.

    Having been a Brand owner for many years, I have always found that when conducting market research before the launch of a new product or graphic, the reply the core followers of the brand gave when I asked if they liked a new product was, in the main, very different from what the majority of consumers actually ended up doing.

    So, as you say Jeff, each Brand must decide what value to attach to pre market release research in relation to their final decision.

    • jeff says:

      Hi Joe,
      I don’t have much in the way of a response because I pretty much agree with you. What the consumer says they will do and what they actually do are two different things.
      Thanks,
      J.

  3. Cary says:

    Jeff,
    Blink was required reading for one of my b-school classes. Although I don’t remember the details of the book, it has probably affected my view of this type of research.

