The Ascent of Ranking Algorithms

Algorithmic ranking is on the rise. Everywhere I turn, something or other is being ranked analytically.

Ranking web pages by relevance, pioneered by Google’s PageRank, may be the best-known example of algorithmic ranking.
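
For the curious, here is a toy version of the idea (not Google’s production formula, just the textbook power-iteration computation on an invented four-page link graph):

```python
import numpy as np

# Toy PageRank by power iteration on a four-page link graph.
# links[i][j] = 1 means page i links to page j; the graph below is invented.
links = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Column-stochastic transition matrix: each page splits its "vote" among its out-links.
M = (links / links.sum(axis=1, keepdims=True)).T

d = 0.85                                  # damping factor
n = len(links)
rank = np.full(n, 1.0 / n)
for _ in range(100):                      # power iteration
    rank = (1 - d) / n + d * M @ rank

print("Pages ranked best-first:", np.argsort(-rank), "scores:", rank.round(3))
```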

Also ubiquitous are the ranking algorithms inside recommender systems. Given an individual’s behavior (browsing history, rating history, purchase history and so on), the idea is to rank the huge universe of things out there (e.g., books, movies, music) by likely appeal to that individual and show the top-rankers. If you are an Amazon or Netflix customer, you have doubtless been on the receiving end of these ranked recommendations for books and movies you may find of interest. Plenty of complex and occasionally elegant math goes into quantifying and predicting “likely appeal” (the Netflix Prize-winning approach is a good example).
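
The prize-winning system blended many models, with matrix factorization among the workhorse ideas, and a stripped-down version of that is enough to show what “rank by likely appeal” means in code. Everything below (the tiny ratings matrix, the learning rate, the number of factors) is made up for illustration:

```python
import numpy as np

# Toy ratings matrix: rows = users, columns = items, 0 = unrated.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items, k = R.shape[0], R.shape[1], 2
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, k))   # user factors
V = rng.normal(scale=0.1, size=(n_items, k))   # item factors

# Plain stochastic gradient descent on the observed ratings only.
for _ in range(2000):
    for u, i in zip(*np.nonzero(R)):
        err = R[u, i] - U[u] @ V[i]
        u_old = U[u].copy()
        U[u] += 0.01 * (err * V[i] - 0.02 * U[u])
        V[i] += 0.01 * (err * u_old - 0.02 * V[i])

# Rank the items a given user has not rated by predicted appeal.
user = 0
scores = U[user] @ V.T
unseen = np.nonzero(R[user] == 0)[0]
ranked = unseen[np.argsort(-scores[unseen])]
print("Recommend items (best first):", ranked, "predicted scores:", scores[unseen].round(2))
```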

Despite its age, recommendation ranking is far from mature, and different flavors of recommender systems are popping up every day. Just last week, BusinessWeek had a story on The Filter, a new recommendation ranking system that is allegedly leaving the other approaches in the dust (aside: one of the founders of The Filter is Peter Gabriel, legendary musician and member of Genesis, one of my favorite rock bands).

So far, I have listed “old” examples of ranking: web pages, books, movies, and music. But recently, I came across something new: SpotRank.

Skyhook Wireless, the company that provides location information to Apple devices (when you fire up Google Maps on your iPhone, your exact location is pinpointed using a combination of GPS information and Skyhook’s wifi database – details), announced SpotRank a few months ago.

By tracking the number of “location hits” their servers receive from Apple devices, Skyhook can determine which spots are popular and when they are popular. They capture this in the form of a popularity score and, as the name suggests, SpotRank ranks locations by their popularity score.
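
Skyhook hasn’t published the actual scoring formula, but the mechanics described above are easy to sketch: bucket the location hits by place and hour, turn the counts into a popularity score, and sort. The log records and place names below are invented for illustration:

```python
from collections import Counter

# Hypothetical location-hit log: one (place, hour of day) record per request to the servers.
hits = [
    ("Cafe Luna", 9), ("Cafe Luna", 9), ("Pier 7", 9),
    ("Cafe Luna", 20), ("Pier 7", 20), ("Pier 7", 20), ("Pier 7", 20),
]

def spot_ranking(hits, hour):
    """Rank places by hit count within the given hour (a stand-in for the real popularity score)."""
    counts = Counter(place for place, h in hits if h == hour)
    return counts.most_common()          # [(place, score), ...] best first

print(spot_ranking(hits, 9))    # [('Cafe Luna', 2), ('Pier 7', 1)]
print(spot_ranking(hits, 20))   # [('Pier 7', 3), ('Cafe Luna', 1)]
```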

Next time you are in a strange part of town, have time to kill and are looking for popular spots, maybe SpotRank can help you (at least if you like hanging out with Apple fans).

Now that places are being ranked, what’s next? Ranking people?

It is already being done. Heard of UserRank?

UserRank was created by Next Jump, a NYC-based company that runs employee discount and reward programs for 90,000 corporations, organizations and affinity groups. Next Jump connects 28,000 retailers and manufacturers to the more than 100 million consumers who work at the companies in its network, typically getting the merchants to offer deep discounts.

Next Jump calculates a UserRank for every one of the 100 million consumers in its database.

The more a user shops on our network, the higher their UserRank™ will be. Users with high UserRank™ are more likely to spend and are typically your best customers.

Next Jump creates value by allowing retailers and merchants to use UserRank in offer targeting. For instance, an offer can be targeted only at consumers with a minimum UserRank.
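
Next Jump hasn’t disclosed how UserRank™ is computed, so the sketch below is pure illustration: score each shopper from made-up purchase-history fields and serve the offer only to those above a merchant-chosen threshold.

```python
from dataclasses import dataclass

@dataclass
class Shopper:
    name: str
    purchases_last_year: int
    total_spend: float

def user_rank(s: Shopper) -> float:
    # Hypothetical score: purchase frequency plus a spend component (the real formula is proprietary).
    return s.purchases_last_year + s.total_spend / 100.0

def target_offer(shoppers, min_rank):
    """Return the shoppers eligible for an offer that requires a minimum UserRank."""
    return [s.name for s in shoppers if user_rank(s) >= min_rank]

shoppers = [Shopper("A", 12, 900.0), Shopper("B", 2, 80.0), Shopper("C", 7, 350.0)]
print(target_offer(shoppers, min_rank=10))   # ['A', 'C']
```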

I wonder what my UserRank is?

My final example is from the field of drug discovery. In a recent article, MIT News describes fascinating work done by researchers at MIT and Harvard on applying ranking algorithms to this area.

The drug development process typically starts with identifying a molecule that’s associated with a disease. Depending on the disease, this “target” molecule either needs to be suppressed or promoted. A drug that’s successful in treating the disease is a chemical (which, of course, is just another molecule) that suppresses or promotes the target molecule without causing bad side-effects.

How is such a drug found? Over the years, researchers have amassed a large catalog of chemicals that can help suppress or promote target molecules. From this catalog, drug developers pick the most promising chemicals to use as drug candidates for further testing and clinical trials. Unfortunately,

majority of drug candidates fail — they prove to be either toxic or ineffective — in clinical trials, sometimes after hundreds of millions of dollars have been spent on them. (For every new drug that gets approved by the U.S. Food and Drug Administration, pharmaceutical companies have spent about $1 billion on research and development.) So selecting a good group of candidates at the outset is critical.

This sounds like a ranking problem: given a target molecule, rank the chemicals in the database according to their likely effectiveness as a viable drug for that target.

The drug companies weren’t slow to recognize this, of course. They have been using machine-learning algorithms since the 90s with some success. However, the MIT-Harvard researchers showed that a

rudimentary ranking algorithm can predict drugs’ success more reliably than the algorithms currently in use.

What was the key idea?

At a general level, the new algorithm and its predecessors work in the same way. First, they’re fed data about successful and unsuccessful drug candidates. Then they try out a large variety of mathematical functions, each of which produces a numerical score for each drug candidate. Finally, they select the function whose scores most accurately predict the candidates’ actual success and failure.

The difference lies in how the algorithms measure accuracy of prediction. When older algorithms evaluate functions, they look at each score separately and ask whether it reflects the drug candidate’s success or failure. The MIT researchers’ algorithm, however, looks at scores in pairs, and asks whether the function got their order right.

(italics mine)

Rather than scoring each drug candidate in isolation and then ranking them all, the key idea was to build pairwise comparisons into the construction of the scoring function itself.
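
Here is a minimal illustration of that pointwise-versus-pairwise distinction. This is not the MIT-Harvard group’s actual algorithm; it is a simple perceptron-style pairwise ranker on synthetic “drug candidate” features, but it captures the key move: the scoring function is updated whenever a failed candidate outranks a successful one, rather than being fit to each label in isolation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: each row is a drug candidate's features; label 1 = succeeded, 0 = failed.
X = rng.normal(size=(120, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (X @ true_w + rng.normal(scale=0.5, size=120) > 0).astype(int)

# Pairwise training: for every (success, failure) pair, require score(success) > score(failure).
w = np.zeros(5)
pos, neg = X[y == 1], X[y == 0]
for _ in range(20):
    for xp in pos:
        for xn in neg:
            if (xp - xn) @ w <= 0:        # pair is ordered wrongly (or tied)
                w += 0.01 * (xp - xn)     # perceptron-style update on the pair

# Rank all candidates by the learned score: higher = more promising.
scores = X @ w
top10 = np.argsort(-scores)[:10]
print("Actual successes among the top 10 ranked candidates:", int(y[top10].sum()), "of 10")
```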

As the data deluge grows larger and larger, finding the information most relevant to one’s needs (be they mundane needs, as in shopping, or profound ones, as in drug discovery) gets harder and harder. Perhaps this is why we are seeing ranking algorithms everywhere.

Have you seen any interesting examples of algorithmic ranking at work? Please share in the comments.

(HT to Karan Singh and Florent De Gantes for making me aware of the MIT News article and Next Jump, respectively)


Selling Your Home? Exploit the Price Precision Effect!

I was leafing through a recent issue of the journal Marketing Science over the weekend and came across an article titled “The Price Precision Effect”. According to the article abstract, the authors found that, in the US residential housing market,

precise prices are judged to be smaller than round prices of similar magnitudes. For example, participants in this experiment incorrectly judged $395,425 to be smaller than $395,000.

I was intrigued! I have seen academic work on the effect of price endings, magic prices, etc. in retail stores (example – behind the Harvard Business Review paywall; a PDF of the HBR article is on the Oregon State website), but I had not come across research on the psychology of price perception for big-ticket items like homes.

I delved into the details of the study that led to the finding cited above but came away disappointed.

In the study, the authors used university students in a laboratory setting rather than actual home buyers or sellers. Further, prices were shown to participants in such a way that no individual saw both a precise price and its round counterpart; the comparison between the two was made indirectly, across all the participants. In short, the study setting was a bit too far from the real world for me to take the finding seriously.

I scanned the other studies described in the article (there are five in total) and found the following in Study 5:

we collected data from actual real estate transactions and tested whether the precision or roundness of list prices influence the magnitude of the sale prices.

Actual real estate transactions. That sounded promising. What did they find?

buyers pay higher sale prices when list prices are more precise.

This is interesting and potentially useful. Just by making the list price look precise, a seller can nudge up the buyer’s willingness-to-pay. By how much?

consider two houses in Long Island with the same zip code and with the same number of rooms and other features; one has a list price of $485,000 and the other has a more precise list price of $484,880. Our results suggest that the house with the more precise list price will sell for about $1,200–$1,450 more.

Not huge but since the effort involved in making a price look precise is close to zero, the ROE (Return on Effort) is very high.

How exactly did the authors quantify this effect?

To assess the effect of price precision on buyer behavior, we regressed the sale price on each of our three measures of price precision.

The authors measure price precision in several different ways (e.g., the number of ending zeros in the price, a 0/1 dummy variable indicating whether the price has three ending zeros) and the results are consistent across these specifications.

Comfortingly, the authors controlled for a number of other variables in the regression.

We also controlled for other factors that may be correlated with both the precision of the list price and the amount of the sale price. These other factors can be broadly grouped into four categories: property-specific, agent-specific, time-specific, and market-specific.

For example, the property-specific variables included

… square footage, number of bedrooms, number of bathrooms, age of the house, as well as dummies for house style, type of heating system, etc.

The other categories were similarly represented.
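
The paper’s exact specification isn’t reproduced here, but the shape of the analysis is easy to sketch: regress sale price on a precision measure plus the controls. Below is a minimal version with entirely made-up data and only two of the many control variables (the variable names, the data-generating process, and the statsmodels formula are all my own illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Synthetic listings: square footage, bedrooms, and whether the list price ends in three zeros.
df = pd.DataFrame({
    "sqft": rng.normal(2000, 400, n),
    "bedrooms": rng.integers(2, 6, n),
    "round_price": rng.integers(0, 2, n),   # 1 = list price ends in three zeros, 0 = precise
})
# Made-up data-generating process: precise list prices fetch a ~$1,300 premium.
df["sale_price"] = (
    100 * df["sqft"] + 15000 * df["bedrooms"]
    - 1300 * df["round_price"] + rng.normal(0, 20000, n)
)

# Regress sale price on the precision dummy, controlling for property characteristics.
model = smf.ols("sale_price ~ round_price + sqft + bedrooms", data=df).fit()
# A negative coefficient on round_price means precise list prices sell for more.
print("Round-price coefficient:", round(model.params["round_price"]))
```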

Overall, I am inclined to believe this finding. While it is not an experimental study, it does use actual real-estate transactional data, carefully controls for confounding variables, and identifies an effect that doesn’t seem outlandishly large.

Best of all, it is easy to put into practice: the next time you put your home on the market, make sure the list price doesn’t have three ending zeros!

Smarter Cruise Control With Analytics

As readers of this blog know, I am always on the lookout for examples of Monday Morning Analytics in action. I stumbled on an unusual and neat example recently.

I was in Chicago last week to give a talk on analytics at Navteq, possibly the world’s largest provider of mapping data and related services. I heard that Navteq map data is used 100m times a day; for example, if you use a Garmin GPS device or a mapping application on a Nokia phone, you are using Navteq data.

I had several interesting conversations about how location data can be profitably used in a variety of contexts, especially in retailing. I heard some great examples of creative and clever location-based services that are likely to appear in the next couple of years, particularly on mobile phones (the marriage of location data with mobile phones has already produced interesting progeny like Foursquare and Gowalla). But what caught my attention was an example that had nothing to do with mobile phones. It involves the cruise-control system in trucks!

All trucks have cruise control. When a truck driver is on an interstate highway and turns on cruise control, the system maintains the desired speed, accelerating and braking as needed.

But this sort of simple cruise control is not particularly fuel-efficient. It burns a lot of gas accelerating up a small hill (since it is trying to hold the desired speed) and then wastes all that kinetic energy braking on the way down the other side (since it doesn’t want to exceed the desired speed).

So far so good. Then, somebody, somewhere asked this question:

“Most trucks have GPS with the underlying map database on-board. From the map data, we know what’s ahead on the road. We know the ups-and-downs of the terrain and curves in the road. Why can’t we use this knowledge of what lies ahead to make the cruise control smarter?”

Brilliant!

They acted on this insight and created a smarter cruise-control system with “analytics inside”. This system uses the detailed map data to accelerate and brake in such a way that fuel consumption is minimized. When a hill is approaching, the system will not accelerate as much as before since it knows it will be going downhill soon and will have plenty of kinetic energy to hit the desired speed. When a curve is approaching, the system will take its foot off the gas pedal and slow down rather than wait for the driver to hit the brakes (this, of course, is a great safety feature as well).

I don’t have data on the number of miles traveled annually by freight trucks, but I am sure it is not a small number. Making those trucks even a tad more fuel-efficient would have a big positive impact on both operating costs and the environment.

In my opinion, this is a neat example of Monday Morning Analytics. The system uses data to make a better decision (as opposed to simply identifying an “insight”). In fact, it goes one step further since it executes the better decision automatically without consulting the human decision-maker.

All the key ingredients of a modern decision-support system are present:

  • data: the truck’s precise location (thanks to the GPS) and the detailed map data. Note that simple map data isn’t enough. The data needs to include features such as terrain, road curves etc. Navteq has developed very cool technology to collect all this information and more.
  • prediction: the detailed map data is used to “predict” what lies ahead. Strictly speaking, they are not predicting as much as looking up the relevant data but the notion of using map data from the immediate horizon of the truck to project fuel-consumption and how it changes with different accelerate/brake decisions feels like predictive modeling.
  • optimization: the system finds the set of accelerate/brake/coast decisions that minimizes fuel consumption while honoring the driver’s desired-speed constraint. This is the textbook definition of optimization (a rough sketch follows the list).
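
Here is the rough sketch promised above. It is not the actual production system, just a toy formulation under invented physics and fuel assumptions: given the road grades for the next few segments (from the map data), search over a small set of throttle plans for the one that minimizes a crude fuel proxy while keeping speed within a band around the set speed.

```python
import itertools

# Toy look-ahead cruise control. All constants, the fuel model, and the grade profile are invented.
DT = 10.0                       # seconds per road segment
SET_SPEED = 25.0                # m/s (~56 mph)
BAND = 3.0                      # allowed deviation from the set speed, m/s
GRADES = [0.00, 0.02, 0.03, 0.00, -0.03, -0.02]   # road grade ahead, from the map data
THROTTLES = [0.0, 0.5, 1.0]     # coast, half, full

def simulate(plan, v0=SET_SPEED):
    """Return (fuel_used, feasible) for a throttle plan over the look-ahead horizon."""
    v, fuel = v0, 0.0
    for throttle, grade in zip(plan, GRADES):
        accel = 0.6 * throttle - 0.01 * v - 9.81 * grade   # toy engine / drag / gravity terms
        v += accel * DT
        fuel += throttle * DT                              # crude proxy: fuel ~ throttle-seconds
        if abs(v - SET_SPEED) > BAND:                      # drifted too far from the set speed
            return fuel, False
    return fuel, True

# Brute force over all throttle plans; keep the cheapest feasible one.
best = min(
    (plan for plan in itertools.product(THROTTLES, repeat=len(GRADES)) if simulate(plan)[1]),
    key=lambda plan: simulate(plan)[0],
)
print("Best throttle plan:", best, "fuel proxy:", simulate(best)[0])
```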

Nicely done!

Saving Lives With Analytics

Fortune has a brief article on aneurysm-spotting analytic software developed by IBM in collaboration with Mayo Clinic (HT to Satish Bhat for bringing this article to my attention).

To help in their aneurysm hunt, radiologists at Mayo Clinic use special software developed with IBM that analyzes a three-dimensional brain scan. Computer algorithms process information in the images, pick out abnormal areas where fragile blood vessels might be hiding, and flag the potential trouble spots for Mayo doctors. So far the results are promising. In trials the software found 95% of aneurysms; a typical radiologist would have found 70%.

95% vs 70%. How many lives saved as a result? I couldn’t find anything in the article on this question so I did some Googling.

Here’s what I found:

perhaps 25,000 to 50,000 people a year in the U.S. have a brain hemorrhage caused by a ruptured aneurysm.

Of these 25,000-50,000 people,

One-third to nearly half of patients have minor hemorrhages or “warning leaks” that later lead to a severe devastating brain hemorrhage days later.

So 8,000-25,000 people a year come in with a “warning leak”. Every one of their brain scans is presumably read by a radiologist. According to the Fortune article, radiologists have only a 70% detection rate, so let’s assume that 30% of the scans (i.e., 2,500 to 7,500 people) are mistakenly judged normal and therefore left untreated. These patients return days later with a burst aneurysm. What happens next?

The overall death rate once the aneurysm ruptures is about 40%

So, between 1,000 and 3,000 patients will die because the aneurysm wasn’t caught during the first visit.

Now, let’s look at how the analytic software would perform. According to Fortune, the software has a 95% detection rate, so 5% of the scans (i.e., 400 to 1,200 people) will be mistakenly judged normal and left untreated. Of these patients, between 160 and 480 will die (using the same 40% death rate as before).

Incremental lives saved? Between 800 and 2,500 patients annually. Wonderful! Kudos to IBM and Mayo.
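
For anyone who wants to poke at the assumptions, here is the whole back-of-the-envelope calculation as a few lines of Python, using the incidence range, warning-leak fraction, detection rates and mortality quoted above (the post rounds some intermediate figures, so the endpoints differ slightly from the exact arithmetic):

```python
# Back-of-the-envelope: incremental lives saved per year by the 95%-accurate software,
# using the ranges and rates quoted in the post.
ruptures = (25_000, 50_000)          # aneurysm-caused brain hemorrhages per year in the U.S.
leak_frac = (1 / 3, 1 / 2)           # fraction who first present with a "warning leak"
mortality = 0.40                     # death rate once an aneurysm ruptures

def deaths(miss_rate):
    lo = ruptures[0] * leak_frac[0] * miss_rate * mortality
    hi = ruptures[1] * leak_frac[1] * miss_rate * mortality
    return lo, hi

radiologist = deaths(miss_rate=0.30)   # 70% detection
software = deaths(miss_rate=0.05)      # 95% detection
saved = (radiologist[0] - software[0], radiologist[1] - software[1])
print(f"Deaths with radiologists alone: {radiologist[0]:,.0f}-{radiologist[1]:,.0f}")
print(f"Deaths with the software:       {software[0]:,.0f}-{software[1]:,.0f}")
print(f"Incremental lives saved:        {saved[0]:,.0f}-{saved[1]:,.0f}")
```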

Here’s a little (hopefully) self-explanatory graphic. The blue box represents the incremental lives saved by the software; the red represents the lives that could be saved if the software’s accuracy goes to 100%.

p.s. I realize that numerous assumptions have been made in this back-of-the-envelope assessment. Feel free to criticize and improve. I just wanted to get a quick sense for how many lives would be impacted.

Factoids, Stories and Insights

Recently, The Economist had a special report titled “Data, data everywhere”. The report examines the rapid increase in data volumes and its implications. It got the attention of the blogosphere (example) and I recommend taking a look if you haven’t already.

When I read articles like these, I try to extract three categories of “knowledge” for future use: factoids, stories, and insights.

  • Factoids are simply data points that I feel might come in handy someday.
  • Stories are real-world anecdotes. The most memorable ones have an “aha!” element to them.
  • Insights are observations (usually at a higher level of abstraction than stories) that make me go “I never thought of that before. But it makes total sense.”

Think of this crude categorization as my personal approach to dealing with information overload. Of course, there’s a fair amount of subjectivity here: what I think of as an insight may be obvious to you and vice-versa.

So what did I make of The Economist article? There were numerous factoids that I cut-and-stored away (too many to list here but email me if you want the list), a few memorable stories, and a couple of insights.

Let’s start with the stories.

In 2004 Wal-Mart peered into its mammoth databases and noticed that before a hurricane struck, there was a run on flashlights and batteries, as might be expected; but also on Pop-Tarts, a sugary American breakfast snack. On reflection it is clear that the snack would be a handy thing to eat in a blackout, but the retailer would not have thought to stock up on it before a storm.

Memorable and concrete. Neat.

Consider Cablecom, a Swiss telecoms operator. It has reduced customer defections from one-fifth of subscribers a year to under 5% by crunching its numbers. Its software spotted that although customer defections peaked in the 13th month, the decision to leave was made much earlier, around the ninth month (as indicated by things like the number of calls to customer support services). So Cablecom offered certain customers special deals seven months into their subscription and reaped the rewards.

Four months before the customer defected, early-warning signs were beginning to appear. Nice but not particularly unexpected.

Airline yield management improved because analytical techniques uncovered the best predictor that a passenger would actually catch a flight he had booked: that he had ordered a vegetarian meal.

Hey, I knew this all along! Over 20 years, I have ordered vegetarian meals almost every time and have almost never missed a flight.

Just kidding. This came out of left field; I had never seen it before. While the claim that airline yield management improved substantially because of this single discovery feels like a stretch, the story is certainly memorable.

Sometimes those data reveal more than was intended. For example, the city of Oakland, California, releases information on where and when arrests were made, which is put out on a private website, Oakland Crimespotting. At one point a few clicks revealed that police swept the whole of a busy street for prostitution every evening except on Wednesdays, a tactic they probably meant to keep to themselves.

Worry-free Wednesdays! Great story, difficult to forget.

Let’s now turn to the two insights that stood out for me.

a new kind of professional has emerged, the data scientist, who combines the skills of software programmer, statistician and storyteller/artist to extract the nuggets of gold hidden under mountains of data.

This wasn’t completely new to me (I have friends whose job title is “Data Scientist”), but seeing the sentence in black and white crystallized the insight for me and made me appreciate the power of the trend, particularly the point that a data scientist needs to sit at the intersection of programming, statistics and storytelling.

As more corporate functions, such as human resources or sales, are managed over a network, companies can see patterns across the whole of the business and share their information more easily.

What the author means by “managed over a network” is “managed in the cloud”. In my experience, data silos are all too common, and this often leads to decisions being optimized one silo at a time, even though optimizing across silos can produce dramatic benefits.

I had not appreciated that, as data for more and more business functions gets housed in the cloud, data silos will naturally disappear and it will become increasingly easy to optimize across functions.

Well, that was what I gleaned from the article. If you “extract knowledge” differently from my factoids/stories/insights scheme, do share in the comments – I would love to know.