On work-life balance; writing simply; ethical issues with recommender systems; and trust and safety in products.

Proof our work-life balance is in danger (but there’s still hope) by Arik Friedman, on the Atlassian blog, shows a couple of compelling visualizations suggesting that people’s workdays have been getting longer since COVID started. Comparing January & February to April & May, the average workday grew by 30 minutes in the United States, 32 minutes in India, and 47 in Israel. I’m curious what those numbers look like today.
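
The post doesn’t publish its exact computation, but a metric like “average workday length” is easy to sketch from activity logs. Here’s a minimal, assumed version in pandas; the data shape, column names, and the “first activity to last activity of the day” definition are my own guesses, not Atlassian’s methodology:

```python
import pandas as pd

# Hypothetical activity log: one row per user action with a timestamp.
# The schema and the workday definition below are assumptions.
events = pd.DataFrame({
    "user": ["a", "a", "a", "a", "b", "b", "b", "b"],
    "ts": pd.to_datetime([
        "2020-02-03 09:05", "2020-02-03 17:30",  # Jan/Feb baseline
        "2020-04-06 08:40", "2020-04-06 18:10",  # Apr/May lockdown
        "2020-02-04 10:00", "2020-02-04 16:45",
        "2020-04-07 09:10", "2020-04-07 17:55",
    ]),
})
events["date"] = events["ts"].dt.date

# Workday length: span between each user's first and last activity of the day.
spans = (
    events.groupby(["user", "date"])["ts"]
    .agg(lambda s: (s.max() - s.min()).total_seconds() / 3600)
    .reset_index(name="hours")
)
spans["month"] = pd.to_datetime(spans["date"]).dt.month

before = spans.loc[spans["month"].isin([1, 2]), "hours"].mean()
after = spans.loc[spans["month"].isin([4, 5]), "hours"].mean()
print(f"Jan/Feb: {before:.2f}h  Apr/May: {after:.2f}h  "
      f"delta: {(after - before) * 60:+.0f} min")
```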

Over half of respondents said it’s harder now to maintain work-life boundaries than before the pandemic, and 23% reported thinking about work during their off-hours more than they used to.

I think this was a great example of a data science blog post. It showed clear, easy-to-understand findings. It stayed focused while still providing enough background on the data sources & their limitations. And the visualizations were compelling enough to tell the story on their own.


How do you write simple explanations without sounding condescending? by Julia Evans gives some tips for explaining complicated topics. Julia Evans’ blog was one of the big inspirations for me to start writing regularly, so I’m always happy to read her advice.

I love “use some jargon to give the reader search terms,” because explanations without jargon have to stand alone and are harder to connect with other work out there. And “write (mostly) true explanations” helps by exposing gaps in my own knowledge and forcing me not to oversimplify something complicated.


How algorithms can learn to discredit the media by Guillaume Chaslot is a blog post about ethical issues in the YouTube recommendation algorithm, from an engineer who previously worked on it. The core idea is that if videos convince you that the mainstream media is lying, you’ll probably spend less time watching traditional media and more time watching YouTube.

But there’s more: if more people start watching anti-MSM content, then more people will also start creating it.

Essentially, content creators are rewarded with “free advertisement” when their message furthers the AI’s goals.

I hadn’t thought of this point before; I previously saw recommender systems as a two-actor feedback loop, with the algorithm mediating interactions between viewers and videos. But creators are part of the loop, too.
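
To make that three-actor loop concrete, here’s a toy simulation. It’s entirely my own sketch, not Chaslot’s model; the watch-time edge and the creator switching rate are made-up parameters:

```python
# Toy model of the viewer / algorithm / creator feedback loop.
# Assumption: "anti_msm" content keeps viewers on-platform slightly
# longer, so a watch-time-maximizing recommender gives it a small edge.
WATCH_TIME = {"anti_msm": 1.2, "other": 1.0}  # relative watch time per view
creators = {"anti_msm": 10.0, "other": 90.0}  # creators making each type
SWITCH_RATE = 0.05  # fraction of creators who chase exposure each step

for step in range(30):
    # Algorithm: recommends in proportion to supply * expected watch time.
    scores = {t: creators[t] * WATCH_TIME[t] for t in creators}
    total = sum(scores.values())
    share = {t: scores[t] / total for t in scores}

    # Creators: per-creator exposure is the "free advertisement" reward,
    # so some creators drift toward whichever type pays more exposure.
    reward = {t: share[t] / creators[t] for t in creators}
    winner = max(reward, key=reward.get)
    loser = min(reward, key=reward.get)
    moved = SWITCH_RATE * creators[loser]
    creators[loser] -= moved
    creators[winner] += moved

    if step % 5 == 0:
        print(f"step {step:2d}: anti_msm share of recs = {share['anti_msm']:.2f}")
```

Even a tiny watch-time edge compounds in this sketch: the recommender amplifies the content, the exposure pulls in more creators, and the growing supply gets amplified further.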


Trust and safety is not a product edge case by Ben Balter makes, essentially, exactly that point. It’s easy to dismiss user safety issues (harassment, violence, stalking, doxxing) as edge cases that affect only a small number of users. When he was new to Trust & Safety at GitHub, Balter did just that himself, before being corrected.

This is a good perspective. I like this quote:

If you knew of a security vulnerability in your product, you wouldn’t deprioritize it because it had only been exploited “a few” times … Why would we protect our users’ data with better care than we’d seek to protect our users themselves?

Why do platforms treat people themselves as less important than the data they create?