Articles I read this week, about procrastination, high-output management, technical research, and more.

Why procrastination is about managing your emotions, not time

Author: Christian Jarrett

How I found this: a friend sent it to me

Summary: the conventional wisdom about procrastination is that it’s a time-management problem: get better at scheduling and estimating how long things will take, and you will procrastinate less. More recent research suggests it’s really about managing our emotions: the task we’re avoiding makes us feel bad, so we cope by doing other things that make us feel better.

Of course, this is a vicious cycle. Procrastination may be an effective distraction in the moment, but it leads to guilt that compounds the initial stress, and to the feeling that hours have escaped you. The flip side is that because it’s an emotional-regulation issue, research on cognitive behavioral therapy can help us address it. One offshoot is “acceptance and commitment therapy” (ACT):

ACT teaches the benefits of ‘psychological flexibility’ – that is, being able to tolerate uncomfortable thoughts and feelings, staying in the present moment in spite of them, and prioritising choices and actions that help you get closer to what you most value in life.

Thoughts: this is absolutely consistent with my experience. I love the idea of ACT, which to me is related to the ideas behind mindfulness meditation: noticing and accepting your emotions, but not letting them govern your actions. Combating procrastination for me is primarily a function of overcoming the lurking anxiety that makes me put something off. Starting to do anything makes it easier to continue, and so I’ve also found it effective to start the easiest thing first.

FiveThirtyEight: the state of the polls

Author: Nate Silver

Summary: this article brings together lots of research about the accuracy & state of election polls since the 2016 Presidential election. Highlights include:

  • Polls have been “quite accurate” in post-2016 elections: the weighted average error in the 21 days before those elections was only 5 points (compared to the overall average of 5.8 points)
  • Polls call the right winner 79% of the time overall; that figure is ~77% since 2010 and upwards of 80% before then, but the dip is due to there being more close elections recently
  • In races with a margin smaller than 3 points, polls identify the winner 58% of the time; better than random, but not much.
  • Polling bias is not consistent from cycle to cycle: the 2015 - 2016 cycle had a 3-point bias towards Democrats, while 2017 - 2019 has a 0.3-point bias towards Republicans. More generally, whatever bias is present in one cycle isn’t present in the next (a small sketch after this list makes the error-vs.-bias distinction concrete).
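
Here’s that sketch. The margins are made up, and I use a plain average where FiveThirtyEight uses a weighted one, so treat this as the shape of the computation rather than their method:

```python
# Minimal sketch: polling "error" vs. "bias", with made-up numbers.
# Margins are (Dem - Rep) in percentage points.

polls = [
    # (poll margin, actual result margin)
    (+4.0, +1.0),   # overstated the Democrat by 3 points
    (-2.0, +2.0),   # missed the winner entirely
    (+6.0, +7.5),   # understated the Democrat by 1.5 points
]

misses = [poll - actual for poll, actual in polls]

# Average *absolute* miss: how far off polls are, regardless of direction.
avg_error = sum(abs(m) for m in misses) / len(misses)

# Average *signed* miss: the systematic lean toward one party.
avg_bias = sum(misses) / len(misses)

print(f"average error: {avg_error:.1f} pts")   # 2.8 pts
print(f"average bias:  {avg_bias:+.1f} pts")   # -0.8 pts (toward Republicans)
```

The same set of polls can have a large average error but near-zero bias if the misses point in both directions, which is why “accurate on average” and “unbiased” are separate claims.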

Silver goes on to discuss FiveThirtyEight’s pollster ratings and the different ways they detect “herding” (when pollsters massage their results to match those of other pollsters). There are other methodological changes to the pollster ratings that are less interesting to me but certainly critical to the accuracy of their election models.
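
The article doesn’t spell out the exact herding test, but the core intuition is statistical: independent polls of the same race should disagree at least as much as sampling error alone predicts, so a cluster of polls that agree too well is suspicious. Here is a rough sketch of that intuition, with hypothetical polls and an arbitrary threshold (FiveThirtyEight’s actual method is more sophisticated):

```python
# Rough sketch of a herding check: if polls of the same race cluster more
# tightly than pure sampling error allows, someone is likely anchoring
# their numbers to everyone else's. All numbers here are hypothetical.

import math
import statistics

# (candidate's share as a proportion, sample size) for one race
polls = [(0.520, 800), (0.510, 650), (0.520, 900), (0.515, 700)]

shares = [share for share, n in polls]

# Expected standard deviation of a single poll's share from sampling alone.
expected_sd = statistics.mean(
    math.sqrt(share * (1 - share) / n) for share, n in polls
)

# Observed spread of the polls around each other.
observed_sd = statistics.stdev(shares)

print(f"expected spread from sampling error: {expected_sd:.3f}")
print(f"observed spread across polls:        {observed_sd:.3f}")

if observed_sd < 0.5 * expected_sd:  # threshold chosen for illustration
    print("polls cluster suspiciously tightly -> possible herding")
```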

High Output Management for (non-managing) Tech Leads

Author: Andrew Hao

Summary: this article translates some ideas from Andrew Grove’s “High Output Management” into the tech lead space (for people who don’t manage). Most of the ideas hinge on the fact that tech leads are still leaders (with context & implicit authority), but not managers (so their feedback may be lower stakes).

The author also brings over the idea of leverage:

The leverage from a team lead comes from the strength of the connective tissue they are building within their team and between other teams … High leverage activities involve communication and coordination between teams. They often help communicate work streams, set expectations to other stakeholders, or do the plain, boring work of defining work clearly and explicitly.

Hao also argues that tech leads should conduct 1:1s precisely because they lack direct managerial authority: 1:1s let them influence, mentor, and coordinate people nonetheless. Other functions he covers include operational review meetings, inter-team knowledge sharing, and decision making.

Thoughts: I appreciate this article as someone who is considering different directions that I want my career to go in. It sounds like it might be appealing to be in a tech lead (or data science lead or research lead) position some day, and posts like this help me to understand some of the differences between them and more traditional people managers.

Hao’s suggestion that some of the uniqueness of tech leads comes precisely from a lack of managerial authority is also interesting. A good tech lead might work with a manager to help team members to grow in technical ways, or otherwise come up with coaching or mentorship strategies that would typically be reserved for a manager.

Technical Research and Preparation

Author: Keavy McMinn

Summary: this article describes strategies for doing technical research—identifying the right problems and solutions, writing exploratory code, and preparing for new work. The author characterizes the guiding questions as:

  • What is the problem we’re trying to solve?
  • How could we approach solving it?
  • How can we prepare for minimal negative consequences?

The “how could we approach solving it” question is best answered by a “spike,” which I’ve only used once (when I was a software intern at Qualtrics). In a spike, one dives deep into a problem to understand it and investigate the feasibility of different approaches. Spikes are never merged—only closed—so they don’t have to be anything approaching production code.

This is a great quote, too:

This work is time consuming, it can be tiresome, and is likely to involve spreadsheets. There’s something comforting in the fact that there’s nothing magical here, just solid preparation. However, it is invaluable in helping build the most robust and appropriate solutions, and saves countless time and effort through steering our work in a good direction early on. The cost of change is at its lowest during the research phase.

Thoughts: sometimes, I feel like my (actual research-related) job is one big spike. I love the idea of diving deep into something for a few days to figure out the possible approaches to a problem, but when my entire job is “how do we solve this as-of-yet unsolved research question,” it doesn’t feel all that different from the day-to-day.

That gives me an idea, though, about being more methodical with my research. My work alternates between investigating the feasibility of new ideas, actually trying to implement a new method, evaluating something I did, and studying new data. Timeboxing these activities and estimating their duration up front might help me manage my time better.

Other, shorter articles

Sample size determination using historical data and simulation is a simulation-based approach for figuring out how large an experiment’s sample size needs to be in order to detect a certain effect. One must specify the levels of statistical & practical significance, and the effect size, and simulation does the rest. This is useful for cases where historical data is available (common) or where the assumptions of normal statistical tests are not met (very common!).
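
The recipe is straightforward enough to sketch. This is a generic version of the idea rather than the linked post’s actual code: resample from historical data, inject the effect you care about detecting, and see how often a test catches it at each candidate sample size. The choice of test (a two-sample t-test) and all of the numbers are illustrative assumptions:

```python
# Generic sketch of simulation-based sample size determination: resample
# "historical" data, inject the effect of interest, run the test, and
# estimate power at each candidate sample size. Names and numbers are
# illustrative, not taken from the linked post.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in for historical observations (in practice, load real data).
historical = rng.normal(loc=100, scale=15, size=5_000)

effect_size = 3.0    # smallest effect worth detecting (practical significance)
alpha = 0.05         # statistical significance level
target_power = 0.8   # how often we want to detect a real effect
n_sims = 2_000

for n in (50, 100, 200, 400, 800):
    detections = 0
    for _ in range(n_sims):
        control = rng.choice(historical, size=n, replace=True)
        treated = rng.choice(historical, size=n, replace=True) + effect_size
        _, p_value = stats.ttest_ind(control, treated)
        detections += p_value < alpha
    power = detections / n_sims
    print(f"n = {n:4d}  estimated power = {power:.2f}")
    if power >= target_power:
        break  # smallest n on the grid that meets the target
```

Because the simulation resamples the historical data directly, the power estimates reflect whatever skew or weirdness the real data has, which is what makes this approach useful when standard test assumptions don’t hold.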

“Knowledge Production” vs. the Conservation and Growth of Knowledge is a short post about the purpose of university research. The author argues that universities were never intended to “produce” or even to innovate; rather, they exist to conserve what we know by also learning more. “The universities were there to pass what we already know on to those who are capable of knowing it but do not yet know it.” This, plus the cultivation of curiosity in generations of students, is the function of universities.

Do you work at a tech company? by Will Larson is yet another useful post from his blog. His definition, based on Camille Fournier’s, is:

Paraphrasing in my own words: in the context of your job hunt, a technology company is one with a strong engineering culture and empowered engineering leadership. Not the business model, not the legal definition, not “pure” innovation, but having a place at the table and being a table you want to sit at.

I love this—companies like Uber or Lyft are clearly thought of as “tech companies,” but their business models are arguably only enabled by tech. But, to me, and as Larson points out, the distinction is not one of business: it’s one of culture. A tech company has a strong tech culture, and while that’s kind of a chicken-and-egg situation, as a job seeker I can’t change the culture, so the existing culture is what counts.


How big technical changes happen at Slack is a blog post about how Slack approaches adopting new technologies, and how they catch revolutions (like React) without chasing fads. The answer is that they let teams experiment with new tech, and cut off failed experiments early. They also force the engineers hoping to push new tech to be the change agents for it, rather than relying on a centralized “we’re going to use React now.” This is almost product work, instead of engineering, but it forces the selection of “Stuff that Works.”

This is actually pretty consistent with my experience at Nielsen data science; new tech doesn’t come down from above, but rather has to move through data science organically. It’s definitely slower than the alternative, but it generally makes for smoother transitions.