Measuring Promotional Effectiveness is Getting Harder

Last week, I read about the results of a promotion run by location service Foursquare and retailer RadioShack.

RadioShack is giving Foursquare users who “check in” to its 5,000-plus locations special discounts for doing so. Those checking in for the first time receive 20% off qualifying purchases, as do “mayors” (the users who check in to a location most frequently). All other users who check in receive a 10% discount.

How did the promotion do? Apparently, very well.

RadioShack customers who use the location-based mobile application Foursquare generally spend three and a half times more than non-Foursquare users, said Lee Applbaum, CMO of RadioShack, while speaking at the Ad Age Digital Conference. The retailer noted that Foursquare users spend more because they tend to purchase higher-priced items like wireless devices.

My first reaction was, “Sure, these users spent more but how do we know it is incremental? Was there a control group?” That got me thinking about how we would design an experiment to measure the incremental impact of such a promotion.

The simplest way to set this up would be to randomly divide the population of Foursquare users into a Test group and a Control group. The Test group customers would get a pop-up message on their smartphone/tablet when they were near a RadioShack alerting them to the promotion; the Control group wouldn’t get this message. You wait for a month and calculate the difference between the spend-per-Test-customer and the spend-per-Control-customer to get at the incremental spend per customer (this isn’t quite correct since we are ignoring time-shifting effects like purchase acceleration, but that’s a topic for another post).
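
For concreteness, here is a minimal sketch of that calculation in Python, assuming we already have a month of per-customer spend totals keyed by customer id. All of the names are hypothetical, not from any real Foursquare or RadioShack system.

```python
import random
from statistics import mean

def assign_groups(customer_ids, test_fraction=0.5, seed=42):
    """Randomly assign each customer to the Test or Control group."""
    rng = random.Random(seed)
    return {cid: "test" if rng.random() < test_fraction else "control"
            for cid in customer_ids}

def incremental_spend_per_customer(spend_by_customer, groups):
    """Average spend of Test customers minus average spend of Control customers."""
    test = [s for cid, s in spend_by_customer.items() if groups[cid] == "test"]
    control = [s for cid, s in spend_by_customer.items() if groups[cid] == "control"]
    return mean(test) - mean(control)
```

With a genuinely random split and a large enough population, the Control average estimates what the Test customers would have spent without the promotion, so the difference is the incremental spend.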

But this simple-minded scheme won’t survive contact with reality.

  • We have sites (example) that are on the lookout for Foursquare promotions and publicize them to their visitors. If a Control group customer visits these sites, they are “exposed” to the promotion and should no longer be in the Control group. Unfortunately, we can’t adjust the numbers to account for this, since we have no way of knowing whether any particular Control customer was exposed.
  • Last week, I blogged about the issues posed by social-media-driven coupon sharing. Obviously, that applies here as well. I tell my friends and family about this cool RadioShack promotion and – boom! – the Control group takes another hit. At least in this scenario, if we have access to the social graph of the sharing user, we can (theoretically) check whether the sharer and their immediate connections are in the Control group and exclude them from the analysis (see the sketch after this list). Easier said than done, since it is not clear how we would get our hands on the data. But the data exists.
  • It is in the interest of both Foursquare and RadioShack to get the word out as much as possible, since that increases the amount of total sales from the promotion. The persnickety concern that incremental sales may be zero (or worse) may not get much airtime with the “bias to action” crowd 🙂
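
The exclusion step from the second bullet might look something like the sketch below, assuming we somehow obtained the set of sharers and a social graph mapping each customer to their immediate connections (a big assumption, as noted).

```python
def clean_control_group(control_ids, sharers, social_graph):
    """Drop sharers and their immediate connections from the Control set.

    control_ids:  set of customer ids currently in the Control group
    sharers:      set of customer ids known to have shared the promotion
    social_graph: dict mapping a customer id to a set of connection ids
    """
    exposed = set(sharers)
    for sharer in sharers:
        exposed |= social_graph.get(sharer, set())
    return control_ids - exposed
```

Note that this only handles one hop of sharing; if the recipients re-share the promotion, the Control group gets contaminated all over again.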

In general, the uncontrolled spread of promotions through indirect sharing (via websites) and direct sharing (through Facebook/Twitter etc.) taints control groups and makes incremental measurement tricky. We need to find a way around this problem.

Any ideas?

(cross-posted from the CQuotient blog)

2 thoughts on “Measuring Promotional Effectiveness is Getting Harder”

  1. Karl,
    Thanks for your comment.

    You’re right; we can estimate a no-promotion baseline and thereby estimate the incremental impact. In fact, as you may know, this is standard practice for measuring the impact of mass (i.e., non-individualized) promotions. However, the results have to be taken with a pinch of salt due to the assumptions involved and lack the “table pounding” unambiguousness of a carefully designed test/control setup.
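
    To make that concrete, here is a minimal sketch of the baseline idea, assuming we have weekly sales totals before and during the promotion; the names are hypothetical and the baseline is deliberately naive.

```python
def estimated_lift(pre_promo_weeks, promo_weeks):
    """Estimate incremental sales as promo-period sales minus a baseline.

    pre_promo_weeks: weekly sales totals before the promotion
    promo_weeks:     weekly sales totals during the promotion
    """
    # Naive baseline: the average pre-promotion week. A production model
    # would adjust for seasonality, trend, and overlapping promotions.
    baseline = sum(pre_promo_weeks) / len(pre_promo_weeks)
    return sum(week - baseline for week in promo_weeks)
```

    The assumptions baked into that baseline are exactly why the results deserve the pinch of salt.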

    Rama

  2. Rama,
    Perhaps you can segment your retail locations demographically. Even with “most people” having smartphones, I have to imagine you could find stores with similar income demographics but fewer smartphones, or at least fewer Foursquare users; yes, you can’t be 100% certain. Also, if you have access to multiple years of data, finding similar promotions in the data and then comparing across time (segmenting before and after Foursquare) might get you there. A lot of work, but then I’m sure you can find a small team of smart interns in Cambridge who’d make Taguchi proud.

    Karl
