Two thoughtful articles this week: one on how machine learning is going real-time, and one on the problems with machine learning in medicine.

Machine learning is going real-time by Chip Huyen is an in-depth discussion of real-time machine learning, a topic with little information out there and even less consensus about what it means.

After talking to machine learning and infrastructure engineers at major Internet companies across the US, Europe, and China, I noticed two groups of companies. One group has made significant investments (hundreds of millions of dollars) into infrastructure to allow real-time machine learning and has already seen returns on their investments. Another group still wonders if there’s value in real-time ML.

The post covers online prediction and online training. Online prediction, while more common than online training, is still rarer than batch prediction (in the United States). Online training is virtually unheard of here, partly because it’s much harder.
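
To make the distinction concrete, here’s a minimal sketch of the three modes. This is my illustration, not from the post (which is framework-agnostic); I’m using scikit-learn’s `SGDClassifier` and `partial_fit` as a stand-in, and the names are only for this example:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Toy data standing in for historical features and labels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))
y_train = rng.integers(0, 2, size=1000)

# A linear model whose loss supports probabilities and incremental updates.
# ("log_loss" is the scikit-learn >= 1.1 name; older versions call it "log".)
model = SGDClassifier(loss="log_loss")
model.fit(X_train, y_train)

# Batch prediction: precompute scores for all known users on a schedule
# (say, nightly) and serve them from a cache or database.
all_users = rng.normal(size=(500, 4))
precomputed_scores = model.predict_proba(all_users)[:, 1]

# Online prediction: score each request as it arrives, which lets the
# model use features that only exist in the moment (e.g. this session).
def handle_request(features: np.ndarray) -> float:
    return model.predict_proba(features.reshape(1, -1))[0, 1]

# Online training: update the model itself as labeled examples stream in,
# instead of retraining from scratch on a batch schedule.
for x_new, y_new in zip(rng.normal(size=(10, 4)), rng.integers(0, 2, size=10)):
    model.partial_fit(x_new.reshape(1, -1), [y_new], classes=[0, 1])
```

Even this toy version hints at why online training is harder: it needs labels that arrive as a stream, and a model that can absorb one example at a time without drifting.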

This was a great introduction, and it made me think that if I didn’t want to be a researcher, then I would focus more on machine learning engineering.


Medicine’s machine learning problem by Rachel Thomas discusses the unique risks of using machine learning to “help” with medicine. The core argument is that data and technology are not “inert”: they exacerbate existing imbalances of power.

Dr. Thomas notes five existing flaws in “ML for medicine” that the field must come to terms with:

  1. Existing data is flawed. Data collection is frequently fraught with bias (Fitbit heart rate monitors are less accurate on people of color; doctors often dismiss women’s pain as psychological).
  2. ML often centralizes power away from the people who are affected most. One example is an algorithm that cut health care for people in Arkansas with cerebral palsy, with no explanation or appeal.
  3. People’s experiences in medicine are often negative; ML systems must consider how they interact with an already-flawed system.
  4. “Expertise” is often considered “doctor knowledge,” but patient knowledge (lived experience) is expertise too; COVID long-haulers not being taken seriously is a salient example.
  5. Questions of bias and fairness often miss the point; power dynamics and inclusion of the people affected matter more.

This was great. I think the questions of how ML shifts power, not just how it’s biased, are particularly important. Medicine also provides an example of the idea that “lived experience is expertise”; this is sometimes controversial, but I believe it.