Large language models are increasing in prevalence, and the field is marching steadily toward even bigger ones. This paper takes a step back to question the risks of these models. Can they be too big? (And yes, the emoji is part of the title!)
Hi, I'm Tushar. Thank you for visiting my website. Here, I post my thoughts on data science and HCI; summaries of papers I've read and talks I've watched; and reflections on other things that I read.
I note all of the reading that I do on a semi-regular basis. I compiled this partly as a reference for myself, and partly as something to point others to.
Explainable AI is often treated as an algorithmic problem, but this framing creates a blind spot around how an AI system fits into an actual organization. This paper uses the idea of social transparency to motivate a new, more practical framework for thinking about explainability.
Measuring ‘engagement’ on social platforms is always going to be a proxy for an actual concept of value; a user engaging with something doesn’t mean they value it. This paper closes that gap, connecting engagement behaviors to value through a Bayesian network. The authors implement their approach on Twitter.
Talking to Judah in our reading club helped me to crystallize some of my thoughts about Ali Alkhatib & Michael Bernstein’s Street-Level Algorithms paper. This post explores these.
Street-level bureaucrats are the people making routine decisions for institutions—administrators, police, professors, and more. This work introduces street-level algorithms: algorithms tasked with filling that same role.
Two thoughtful articles this week: on how machine learning is going real time, and the problems with machine learning in medicine.
Fairness in machine learning is typically concerned with ideas like discrimination, disparate impact or treatment, and protected classes. This paper describes how the definitions being used in ML aren’t always compatible with their definitions in the legal system.