How to read without slipping into “check the box” mode

When I come across interesting-sounding long-form articles, blog posts, and the like, I save them to read later. When I am ready to read, I usually just pick whichever one looks the most interesting from the lot and start reading it.

But I have noticed that this often puts me in a frame of mind where I find myself reading impatiently. I want to get to the end of the article quickly so that I can “check it off” as done and move on to the next one.

This is not only unpleasant, it also defeats the whole point of reading the article. I want to savor it and extract whatever is useful or insightful from it.

Why does this happen?

My theory is that when I see the long list of unread articles, my brain gets very uncomfortable and shifts into “let’s crush that list” mode. It forces me to read faster. It tries to maximize articles read rather than insights gained.

If this theory is true, how do I solve the problem?

This is my current solution: I select the next article to read without looking at the list. I literally click an article link at random without looking at the screen*.
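The trick is simple enough to sketch in a few lines of Python. (This is purely illustrative; `reading_list` and `pick_next` are hypothetical names, not any real extension's API.)

```python
import random

def pick_next(reading_list):
    """Pick the next article uniformly at random,
    without ever displaying the full list."""
    return random.choice(reading_list)

# Example: the picker only ever surfaces one item,
# so the size of the backlog stays out of sight.
saved = ["essay-on-attention", "ml-paper-summary", "longform-profile"]
next_read = pick_next(saved)
```

The point of the design is exactly that `pick_next` returns a single item: the backlog never gets rendered, so there is no list to “crush.”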

I am happy to report that this “one weird trick” 🙂 works. It has been quite effective in keeping my brain in the right mode.

It is a bit strange that it even helps since I obviously know that there are lots of unread articles in my stack. But, somehow, not seeing the long list of unreads when picking the next thing seems to make a difference.

Perhaps it reminds my brain that there are an effectively infinite number of articles out there and trying to read everything is futile anyway? Who knows.

All this said, my brain isn’t 100% happy with this approach. It keeps reminding me that a randomly chosen article is very unlikely to be the best one in the pile, so I am not maximizing value gained per unit of reading time.

Fair enough, but if I am reading the best article badly, am I really maximizing value? Also, I can’t easily** pick the best article without looking at the list, which would put me right back into “check the box” mode.

Anyway, it has only been a few weeks and who knows if this will continue to work. Still, I am happy with the results so far. Fingers crossed.

Do you have the same problem? How have you tried to solve it? Please share in the comments.


*The Reading List 2 Chrome extension has a convenient ‘pick a random item’ feature, but I wish the native Chrome Reading List side-tab – which I use heavily across all my devices – had one.

**A personalized next-best-article recommendation feature in the Chrome Reading List side-tab would be nice.

6 steps for leading successful data science teams

An increasing number of organizations are bringing data scientists on board as executives and managers recognize the potential of data science and artificial intelligence to boost performance. But hiring talented data scientists is one thing; harnessing their capabilities for the benefit of the organization is another.

Supporting and getting the best out of data science teams requires a particular set of practices, including clearly identifying problems, setting metrics to evaluate success, and taking a close look at results. These steps don’t require technical knowledge; instead, they place a premium on clear business thinking, including understanding the business and knowing how to achieve impact for the organization.

Data science teams can be a great source of value to the business, but without proper guidance they are unlikely to succeed. Following these steps will help data science teams realize their full potential, to the benefit of your organization.

Continue reading: https://mitsloan.mit.edu/ideas-made-to-matter/6-steps-leading-successful-data-science-teams

From Prediction to Action — How to Learn Optimal Policies From Data


If you know how to build predictive models, you can leverage this knowledge to learn optimal policies – rules that tell you the best way to act in various situations – directly from data.

Policy optimization problems are very common in the business world (e.g., arguably, every personalization problem is a policy optimization problem) and knowing how to solve them is a data science superpower.

The following series of blog posts aims to give you that superpower 🙂

  • In Part 1, I motivate the need to learn optimal policies from data. Policy optimization covers a vast range of practical situations and I briefly describe examples from healthcare, churn prevention, target marketing and city government.
  • In Part 2, I walk through how to create a dataset so that it is suited for policy optimization.
  • In Part 3, I describe a simple (and, in my opinion, magical) way to use such a dataset to estimate the effectiveness of any policy.
  • In Part 4, I show how to use such a dataset to find an optimal policy.
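The series itself walks through the details, but to give a flavor of what “estimating the effectiveness of any policy” from logged data can look like, here is a minimal sketch of the standard inverse propensity scoring estimator. (This is my own illustration of one common technique, not necessarily the exact method used in Part 3; all names here are made up.)

```python
def ips_value(logs, policy):
    """Estimate the average reward of `policy` from logged data
    via inverse propensity scoring.

    logs: list of (context, action, reward, logging_prob) tuples,
          where logging_prob is the probability the logging policy
          assigned to the action it actually took.
    policy: function context -> action (deterministic, for simplicity).
    """
    total = 0.0
    for context, action, reward, logging_prob in logs:
        # Count a logged reward only when the target policy would have
        # taken the same action, reweighted by how likely the logging
        # policy was to take it.
        if policy(context) == action:
            total += reward / logging_prob
    return total / len(logs)
```

The reweighting is what makes this work: actions the logging policy rarely took, but the target policy prefers, get amplified so the estimate is unbiased on average.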

Happy learning!

Lessons from a Deep Learning Master


Yoshua Bengio is a Deep Learning legend and won the Turing Award in 2018, along with Geoff Hinton and Yann LeCun.

In this short post, I want to highlight for you some clever things that Yoshua and his collaborators did to win a Machine Learning competition from a field of 381 competing teams. Perhaps these ideas will be useful in your own work.

In a world where powerful Deep Learning frameworks (e.g., TensorFlow, PyTorch) are a free download away, their competition-winning approach demonstrates nicely that your edge may come from how well you model the specifics of your problem.

Read the rest of the post on Medium.

How to Use Causal Inference In Day-to-Day Analytical Work (Part 2 of 2)

In Part 1, we looked at how to use Causal Inference to draw the right conclusions — or at least not jump to the wrong conclusions — from observational data.

We saw that confounders are often the reason why we draw the wrong conclusions and learned about a simple technique called stratification that can help us control for confounders.

In this article, we present another example of how to use stratification and then consider what to do when there are so many confounders that stratification becomes messy.
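To give a flavor of stratification (a minimal sketch of my own, not code from the post): group observations by the confounder’s value, compare treated and untreated outcomes within each stratum, and average the per-stratum differences weighted by stratum size.

```python
from collections import defaultdict

def stratified_effect(records):
    """Estimate a treatment effect while controlling for one
    confounder via stratification.

    records: list of (stratum, treated, outcome) tuples, where
    `stratum` is the confounder's value and `treated` is a bool.
    """
    groups = defaultdict(lambda: {True: [], False: []})
    for stratum, treated, outcome in records:
        groups[stratum][treated].append(outcome)

    effect, total = 0.0, 0
    for arms in groups.values():
        if not arms[True] or not arms[False]:
            continue  # no within-stratum comparison possible
        n = len(arms[True]) + len(arms[False])
        diff = (sum(arms[True]) / len(arms[True])
                - sum(arms[False]) / len(arms[False]))
        effect += diff * n  # weight each stratum by its size
        total += n
    return effect / total
```

Because each comparison happens inside a single stratum, the confounder is held fixed; the messiness the article alludes to appears when there are many confounders and the strata become too small to compare.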

Read the rest of the post on Medium.