Impact of “Big Data” on Retail: The McKinsey View (Part 1 of 2)

The McKinsey Global Institute (MGI), the research arm of consulting firm McKinsey & Company (disclosure: I am an alum), released a report on Big Data and analytics last week. I have summarized the key insights from the retail section of the report at the CQuotient blog. Please head over there if you are interested. Thanks.

 


Measuring Promotional Effectiveness is Getting Harder

Last week, I read about the results of a promotion run by location service Foursquare and retailer RadioShack.

RadioShack is giving Foursquare users who “check in” to its 5,000-plus locations special discounts for doing so. Those checking in for the first time receive 20% off qualifying purchases, as do “mayors” (designated users who frequently check into a location). All other users who check in receive a 10% discount.

How did the promotion do? Apparently, very well.

RadioShack customers who use the location-based mobile application Foursquare generally spend three and a half times more than non-Foursquare users, said Lee Applbaum, CMO of RadioShack, while speaking at the Ad Age Digital Conference. The retailer noted that Foursquare users spend more because they tend to purchase higher-priced items like wireless devices.

My first reaction was, “Sure, these users spent more but how do we know it is incremental? Was there a control group?” That got me thinking about how we would design an experiment to measure the incremental impact of such a promotion.

The simplest way to set this up would be to randomly divide the population of Foursquare users into a Test group and a Control group. The Test group customers would get a pop-up message on their smartphone/tablet when they were near a RadioShack alerting them to the promotion. The Control group would not get this message. You wait for a month and calculate the difference between the spend-per-Test-customer and the spend-per-Control-customer to get at the incremental spend per customer (this isn’t quite correct since we are ignoring time-shifting effects like purchase acceleration, but that’s a topic for another post).
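A minimal sketch of that calculation, assuming we have a list of Foursquare user ids and a mapping of each customer’s spend over the measurement window (the names assign_groups and incremental_spend are illustrative, not from any real system):

```python
import random
from statistics import mean

def assign_groups(customer_ids, test_fraction=0.5, seed=42):
    """Randomly split customers into Test and Control groups."""
    rng = random.Random(seed)
    test, control = [], []
    for cid in customer_ids:
        (test if rng.random() < test_fraction else control).append(cid)
    return test, control

def incremental_spend(spend_by_customer, test_ids, control_ids):
    """Average spend per Test customer minus average spend per Control customer.

    spend_by_customer maps customer id -> total spend over the measurement
    window; customers who bought nothing are treated as spending 0.
    """
    test_avg = mean(spend_by_customer.get(cid, 0.0) for cid in test_ids)
    control_avg = mean(spend_by_customer.get(cid, 0.0) for cid in control_ids)
    return test_avg - control_avg
```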

But this simple-minded scheme won’t survive contact with reality.

  • We have sites (example) that are on the lookout for Foursquare promotions and publicize them to their visitors. If a Control group customer visits these sites, they are “exposed” to the promotion and should no longer be in the control group. Unfortunately, we can’t adjust the numbers to account for this exposure, since we have no way of knowing whether any particular Control customer was exposed or not.
  • Last week, I blogged about the issues posed by social-media-driven coupon sharing. Obviously, that applies here as well. I tell my friends and family about this cool RadioShack promotion and – boom! – the Control group takes another hit. At least in this scenario, if we have access to the social graph of the sharing user, we can (theoretically) check whether the sharer and their immediate connections are in the control group and exclude them from the analysis (see the sketch after this list). Easier said than done, since it is not clear how we would get our hands on the data. But the data exists.
  • It is in the interest of both Foursquare and RadioShack to get the word out as much as possible, since that increases the amount of total sales from the promotion. The persnickety concern that incremental sales may be zero (or worse) may not get much airtime with the “bias to action” crowd 🙂
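If we did somehow get the sharing data, pruning the contaminated control customers could look something like this sketch (the sharers list and social_graph mapping are hypothetical inputs we don’t actually have today):

```python
def prune_control_group(control_ids, sharers, social_graph):
    """Drop control customers who shared the promotion, plus their
    immediate connections, since all of them may have been exposed.

    social_graph maps a customer id -> set of ids of their direct
    connections (e.g. Facebook friends or Twitter followers).
    """
    exposed = set(sharers)
    for sharer in sharers:
        exposed.update(social_graph.get(sharer, set()))
    return [cid for cid in control_ids if cid not in exposed]
```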

In general, the uncontrolled spread of promotions through indirect sharing (via websites) and direct sharing (through Facebook/Twitter etc.) taints control groups and makes incremental measurement tricky. We need to find a way around this problem.

Any ideas?

(cross-posted from the CQuotient blog)