How to Use Causal Inference In Day-to-Day Analytical Work (Part 1 of 2)

Analysts and data scientists operating in the business world are awash in observational data: data generated in the course of running the business. This is in contrast to experimental data, where subjects are randomly assigned to different treatment groups and outcomes are recorded and analyzed (think randomized clinical trials or A/B tests).

Experimental data can be expensive or, in some cases, impossible or unethical to collect (e.g., assigning people to smoking vs. non-smoking groups). Observational data, on the other hand, is very cheap since it is generated as a side effect of business operations.

Given this cheap abundance of observational data, it is no surprise that ‘interrogating’ this data is a staple of everyday analytical work. And one of the most common interrogation techniques is comparing groups of ‘subjects’ — customers, employees, products, … — on important metrics.

Shoppers who used a “free shipping for orders over $50” coupon spent 14% more than shoppers who didn’t use the coupon.

Products in the front of the store were bought 12% more often than products in the back of the store.

Customers who buy from multiple channels spend 30% more annually than customers who buy from a single channel.

Sales reps in the Western region delivered 9% higher bookings-per-rep than reps in the Eastern region.

Comparisons are very useful and give us insight into how the system (i.e., the business, the organization, the customer base) really works.

And these insights, in turn, suggest things we can do — interventions — to improve outcomes we care about.

Customers who buy from multiple channels spend 30% more annually than customers who buy from a single channel.

30% is a lot! If we could entice single-channel shoppers to buy from a different channel the next time around (perhaps by sending them a coupon that only works for that new channel), maybe they will spend 30% more the following year?

Products in the front of the store were bought 12% more often than products in the back of the store.

Wow! So if we move weakly-selling products from the back of the store to the front, maybe their sales will increase by 12%?

These interventions may have the desired effect if the data on which the original comparison was calculated is experimental (e.g., if a random subset of products had been assigned to the front of the store and we compared their performance to the ones in the back).

But if our data is observational — some products were selected by the retailer to be in the front of the store for business reasons; given a set of channels, some customers self-selected to use a single channel while others used multiple channels — you have to be careful.


Because comparisons calculated from observational data may not be real. They may NOT reflect how your business really works, and acting on them may get you into trouble.

How can we tell if a comparison is trustworthy? Read the rest of the post on Medium to learn how.

Create a Common-Sense Baseline First

When you set out to solve a data science problem, it is very tempting to dive in and start building models.

Don’t. Create a common-sense baseline first.

A common-sense baseline is how you would solve the problem if you didn’t know any data science. Assume you don’t know supervised learning, unsupervised learning, clustering, deep learning, whatever. Now ask yourself: how would I solve this problem?

Read the rest of the post on Medium

I have data. I need insights. Where do I start?

This question comes up often.

It is typically asked by starting data scientists, analysts and managers new to data science. Their bosses are under pressure to show some ROI from all the money that has been spent on systems to collect, store and organize the data (not to mention the money being spent on data scientists).

Sometimes they are lucky – they may be asked to solve a very specific and well-studied problem (e.g., predict which customer is likely to cancel their mobile contract). In this situation, there are numerous ways to skin the cat and it is data science heaven.

But often they are simply asked to “mine the data and tell me something interesting”.

Where to start?

This is a difficult question and it doesn’t have a single, perfect answer. I am sure experienced practitioners have evolved many ways to do this. Here’s one way that I have found to be useful … (read the rest of the post on Medium)

Handy Command-Line One-liners for Starting Data Scientists

[6/5/2017 update: I was asked if I had a PDF version of the one-liners below. Here it is. Data-Science-One-Liners.pdf ]

Experienced data scientists use Unix/Linux command-line utilities (like grep, sed and awk) a great deal in everyday work. But starting data scientists, particularly those without programming experience, are often unaware of the power and elegance of these utilities.

When interviewing candidates for data scientist positions, I ask simple data manipulation questions that can be done with a command-line one-liner. But often the answer is “I will fire up R, import the CSV into a data frame, and then …” or “I will load the data into Postgres and then …”.
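To make the contrast concrete, here is a hypothetical example of such a task (the file name and layout are my assumptions, not one of the actual interview questions): given a CSV of orders with the product category in the second column, list the five most common categories. On the command line this is a single pipeline:

```shell
# orders.csv is a hypothetical file: header row, category in column 2.
# tail skips the header, cut extracts the column, sort + uniq -c builds
# a frequency table, sort -rn ranks it, and head keeps the top five.
tail -n +2 orders.csv | cut -d, -f2 | sort | uniq -c | sort -rn | head -5
```

No data frame, no database load: the pipeline streams through the file once, so it works on inputs far larger than memory.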

The command line can be much simpler and faster, especially for getting large data files ready for consumption by specialized tools like R. For example, rather than try to load a million-row CSV into R and sample 10% of it, you can quickly create a 10% sample using this one-liner … (read the rest of the post on Medium)
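As a sketch of what such a sampling one-liner can look like (the exact command in the post may differ; the file names and the awk rand() approach here are my assumptions), this keeps the header row and passes each data row through with probability 0.10:

```shell
# Keep the header (NR == 1) and emit each subsequent row with
# probability 0.10; srand() seeds awk's random number generator.
awk 'BEGIN { srand() } NR == 1 || rand() < 0.10' big.csv > sample.csv
```

The sample is probabilistic, so the row count will vary slightly around 10%; for an exact-size sample you could instead use shuf -n on systems that have it.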

Building startups in an exit-friendly way

I recently spoke at StartMIT, MIT’s student entrepreneurship bootcamp. The topic was exit strategies and how to build startups in an exit-friendly way.

Slides below. I had quite a bit of content in the talk-track that’s not reflected in the slides so if you have questions, please feel free to ask in the comments. Slide #41 (“How to talk to an acquirer”) evoked the most interest both during the talk and in the Q&A afterwards.

[1-17-2017 update: not everyone is able to view the embedded slides below for some reason, so here’s a downloadable/printable PDF version: Exit Strategies PDF]