How to read without slipping into “check the box” mode

When I come across interesting-sounding long-form articles, blog posts, etc., I save them to read later. When I am ready to read, I usually just pick whichever one looks most interesting from the lot and start reading it.

But I have noticed that this often puts me in a frame of mind where I find myself reading impatiently. I want to get to the end of the article fast so that I can “check it off” as done and move to the next one.

This is not only unpleasant but it also defeats the whole point of reading the article. I want to savor it and extract from it things that are useful or insightful or whatever.

Why does this happen?

My theory is that when I see the long list of unread articles, my brain gets very uncomfortable and shifts into “let’s crush that list” mode. It forces me to read faster. It tries to maximize articles read rather than insights gained.

If this theory is true, how do I solve the problem?

This is my current solution: I select the next article to read without looking at the list. I literally click on an article link at random, without looking at the screen*.
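(For what it's worth, the same idea is trivial to script outside the browser. Here is a minimal Python sketch; the file name and one-URL-per-line format are just assumptions for illustration, not how my actual reading list is stored.)

```python
# Pick the next article uniformly at random from a saved list,
# without ever displaying the list itself.
import random
import webbrowser

with open("reading_list.txt") as f:            # assumed: one URL per line
    urls = [line.strip() for line in f if line.strip()]

webbrowser.open(random.choice(urls))           # open one at random; never show the rest
```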

I am happy to report that this “one weird trick” 🙂 works. It has been quite effective in keeping my brain in the right mode.

It is a bit strange that it even helps since I obviously know that there are lots of unread articles in my stack. But, somehow, not seeing the long list of unreads when picking the next thing seems to make a difference.

Perhaps it reminds my brain that there are an effectively infinite number of articles out there and trying to read everything is futile anyway? Who knows.

All this said, my brain isn’t 100% happy with this approach. It keeps reminding me that a randomly chosen article is very unlikely to be the best one in the pile, so I am not maximizing value gained per unit of reading time.

Fair enough, but if I am reading the best article badly, am I really maximizing value? Also, I can’t easily** pick the best article without looking at the list, which would put me right back into “check the box” mode.

Anyway, it has only been a few weeks and who knows if this will continue to work. Still, I am happy with the results so far. Fingers crossed.

Do you have the same problem? How have you tried to solve it? Please share in the comments.


*The Reading List 2 Chrome extension has a convenient ‘pick a random item’ feature, but I wish the native Chrome Reading List side-tab – which I use heavily across all my devices – had it.

**A personalized next-best-article recommendation feature in the Chrome Reading List side-tab would be nice.

Building startups in an exit-friendly way

I recently spoke at StartMIT, MIT’s student entrepreneurship bootcamp. The topic was exit strategies and how to build startups in an exit-friendly way.

Slides below. I had quite a bit of content in the talk-track that’s not reflected in the slides so if you have questions, please feel free to ask in the comments. Slide #41 (“How to talk to an acquirer”) evoked the most interest both during the talk and in the Q&A afterwards.

[1-17-2017 update: not everyone is able to view the embedded slides below for some reason, so here’s a downloadable/printable PDF version: Exit Strategies PDF]

AlphaGo is Here. What’s Next?

One of the most dramatic events in 2016 was the triumph of Google DeepMind’s AlphaGo AI program against Lee Sedol of South Korea, one of the world’s top Go players.

This was a shock to many. Chess fell to AI many years ago, but Go was thought to be safe from AI for a while longer, and AlphaGo’s success set off a flurry of questions. Is AI much further along than we think? Are robots with human-level intelligence just around the corner?

Experts have lined up on both sides of these questions and there’s no shortage of perspectives. I wanted to share two that particularly resonated with me.

In an Edge interview on big data and AI (which is a great read in its entirety, btw), Gary Marcus of NYU highlights a key requirement of systems like Google DeepMind’s Atari AI and AlphaGo AI.

You’d think if it’s so great let’s take that same technique and put it in robots, so we’ll have robots vacuum our homes and take care of our kids. The reality is that in the [Google DeepMind] Atari game system, first of all, data is very cheap. You can play the game over and over again. If you’re not sticking quarters in a slot, you can do it infinitely. You can get gigabytes of data very quickly, with no real cost.

If you’re talking about having a robot in your home? – I’m still dreaming of Rosie the robot that’s going to take care of my domestic situation – you can’t afford for it to make mistakes. The DeepMind system is very much about trial and error on an enormous scale. If you have a robot at home, you can’t have it run into your furniture too many times. You don’t want it to put your cat in the dishwasher even once. You can’t get the same scale of data.

This is certainly true in my experience. Without lots and lots of data to learn from, the fancy machine learning/deep learning stuff doesn’t work as well (this is not to say that data is everything; many math/CS tricks contributed to the breakthroughs, but lots of data is a must-have).

So is that it? In situations where we can’t have “trial-and-error on an enormous scale”, are we basically stuck?

Perhaps not. Machine learning researcher Paul Mineiro acknowledges this …

In the real world we have sample complexity constraints: you have to perform actual actions to get actual rewards.

… and suggests a way around it.

However, in the same way that cars and planes are faster than people because they have unfair energetic advantages (we are 100W machines; airplanes are much higher), I think “superhuman AI”, should it come about, will be because of sample complexity advantages, i.e., a distributed collection of robots that can perform more actions and experience more rewards (and remember and share all of them with each other).

AIs remembering and sharing with each other. That’s a cool idea.

Perhaps we can’t reduce the total amount of trial-and-error necessary for AIs to learn, but maybe we can “spread the data-collection pain” across thousands of AIs, learn from the pooled data, push the learning back out to all the AIs, and run this loop continuously. If my robot bumps into the furniture, maybe yours won’t have to.
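To make that loop concrete, here is a toy Python sketch of my reading of the idea (not anything from DeepMind or Mineiro): many simulated agents pool their trial-and-error on a simple bandit-style problem, a central learner fits estimates on the pooled experience, and the estimates are pushed back out to every agent. All names and the toy environment are illustrative assumptions.

```python
# Toy "pool the trial-and-error" loop: many agents share experience,
# one learner fits on the pooled data, everyone gets the update back.
import random
from collections import defaultdict

# Hypothetical actions a home robot might try, with unknown success rates.
TRUE_REWARD = {"safe_path": 0.6, "near_furniture": 0.2, "risky_shortcut": 0.4}

def act(estimates, epsilon=0.1):
    """Epsilon-greedy choice using the shared, pooled estimates."""
    if random.random() < epsilon or not estimates:
        return random.choice(list(TRUE_REWARD))
    return max(estimates, key=estimates.get)

def run_pooled_learning(num_agents=1000, rounds=20):
    shared_experience = []      # pooled (action, reward) samples from all agents
    shared_estimates = {}       # the "learning" pushed back out to every agent
    for _ in range(rounds):
        # 1. Every agent acts using the current shared estimates and reports back.
        for _ in range(num_agents):
            action = act(shared_estimates)
            reward = float(random.random() < TRUE_REWARD[action])   # noisy outcome
            shared_experience.append((action, reward))
        # 2. The central learner re-fits on the pooled data (here: mean reward per action).
        totals, counts = defaultdict(float), defaultdict(int)
        for action, reward in shared_experience:
            totals[action] += reward
            counts[action] += 1
        shared_estimates = {a: totals[a] / counts[a] for a in counts}
        # 3. The updated estimates are "pushed back out" by being shared state.
    return shared_estimates

if __name__ == "__main__":
    print(run_pooled_learning())   # each agent only needed ~20 tries of its own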

Come to think of it, this “remembering and sharing with each other” is one of the arguments that have been put forth for how Homo sapiens evolved from humble beginnings to today, where they can build things like AlphaGo.

One More Reason to Prefer Simple Models

When building models for classification and regression, the question often arises: go with a simple model that’s easy to understand but doesn’t have the highest accuracy? Or go with the model that’s impressively complex but much more accurate?

The needs of the situation often force one choice over another. If explainability is important, the simple model may win. If black boxes are fine and it is all about accuracy, the complex model may be chosen. If the accuracy is roughly the same, Occam’s Razor may point to the simpler model.

I recently came across a different reason for preferring the simpler model.

In Classifier Technology and the Illusion of Progress, David Hand argues that the accuracy advantage of the complex model may not persist for long [note that he refers to the data used to train and validate the model as the “design distribution”]:

The performance difference between two classifiers may be irrelevant in the context of the differences arising between the design and future distributions … more sophisticated classifiers, which almost by definition model small idiosyncrasies of the distribution underlying the design set, will be more susceptible to wasting effort in this way: the grosser features of the distributions (modeled by simpler methods) are more likely to persist than the smaller features (modeled by the more elaborate methods).

The apparent superiority of the more sophisticated tree classifier over the very simple linear discriminant classifier is seen to fade when we take into account the fact that the classifiers must necessarily be applied in the future to distributions which are likely to have changed from those which produced the design set … the simple linear classifier captures most of the separation between the classes, the additional distributional subtleties captured by the tree method become less and less relevant when the distributions drift. Only the major aspects are still likely to hold.

Data scientists are often cautioned that future data may be different from the data used for training the model. This advice isn’t new.

What I found interesting was the notion that, even when the data changes in the future, its major features are likely to hold up for longer or change more slowly than its minor features. And this, in turn, favors simpler models since they tend to use the major features.
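Here is a toy experiment (my own illustrative setup, not from Hand’s paper) that shows the flavor of the argument: labels depend on a large, stable linear effect plus a small nonlinear “wrinkle” that changes after drift. A deep tree that fits the wrinkle tends to look better on the design distribution, but its edge shrinks once the wrinkle drifts, while the simple linear model holds up.

```python
# Simple vs. complex classifier under distribution drift (toy illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_data(n, wrinkle_sign):
    X = rng.normal(size=(n, 2))
    # Major feature: a strong linear effect of x0 (stable over time).
    # Minor feature: a small nonlinear wrinkle in x1 whose sign flips after drift.
    logits = 2.0 * X[:, 0] + wrinkle_sign * np.sin(3 * X[:, 1])
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

X_train, y_train = make_data(2000, wrinkle_sign=+1)   # "design distribution"
X_now,   y_now   = make_data(2000, wrinkle_sign=+1)   # same distribution
X_drift, y_drift = make_data(2000, wrinkle_sign=-1)   # minor feature has drifted

simple_model  = LogisticRegression().fit(X_train, y_train)
complex_model = DecisionTreeClassifier(max_depth=None, random_state=0).fit(X_train, y_train)

for name, model in [("linear", simple_model), ("deep tree", complex_model)]:
    print(name,
          "design:", round(model.score(X_now, y_now), 3),
          "drifted:", round(model.score(X_drift, y_drift), 3))
```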

Can Animals Perceive Human Relationships?

From the thought-provoking Beyond Words by Carl Safina:

When one individual knows another’s relationship to a third, it’s called “understanding third party relationships.”

Primates understand third-party relationships … and so do wolves, hyenas, dolphins, birds of the crow family, and at least some parrots.

A parrot, say, can act jealous of its keeper’s spouse. When the vervet monkeys that are common around camp hear an infant’s distress call, they instantly look to the infant’s mother. They know exactly who they and everyone else are. They understand precisely who is important to whom.

When free-living dolphin mothers want young ones to stop interacting with humans, the mothers sometimes direct a tail slap at the human who has the baby’s attention, signaling, in effect, “End the game; I need my child’s attention.”

When the dawdling youngsters are interacting with dolphin researcher Denise Herzing’s graduate assistants, their mothers occasionally direct these – what should we call them: reprimands? – at Herzing herself. This shows that dolphins understand that Dr. Herzing is the leader of all the humans in the water.

For free-living creatures to perceive rank-order in humans – just astonishing.

I am only about a third through the book but it is already changing the way I interact with my 5-year-old Labrador Retriever Google.