This week’s reading: why finishing a personal project is hard, a GPT-3 bot posing as a human, more findings from Cambridge Analytica, and an interview with Safiya Noble.

Motivation and why finishing a personal project is hard

Author: ONuallainc

I’m guilty of saying “I lost motivation” about the personal projects I’m working on. Countless other developers are, too. Why?

The author suggests it’s often because people underestimate the “number of hats” they need to wear to finish a project. You might have goals for your project (learn about X, build something using Y), but to actually get it done there are other, often unrelated, things you have to do, too.

(For my Pokemon Mystery Dungeon rescues project, the answer was “marketing.” Gross. This Reddit discussion has more experiences.)

They suggest tackling this through better planning (really mapping out all the pieces the project involves) and a commitment to finishing in whatever form is most appropriate for you.


GPT-3 Bot Posed as Human on AskReddit for a Week from kmeme recounts how an account powered by GPT-3 rapidly replied to AskReddit questions for a week. The author of the post suspected it was GPT-3, raised the question on the GPT-3 subreddit, and found that the account was built on a service called Philosopher AI.

This is wild. The account answered questions on all kinds of topics, often with coherent, multi-paragraph responses. Some of the deepest-sounding phrases were “original,” too, in that they were not direct quotes found on Google.

It wasn’t stopped until the developer of the Philosopher AI service banned it. Platforms are not ready for this.


Cambridge Analytica sought to use Facebook data to predict partisanship for voter targeting, UK investigation confirms reports on a letter from the UK data watchdog about Cambridge Analytica. In short, the regulator found that Cambridge Analytica overstated its capabilities and wasn’t doing anything its competitors couldn’t.

“Per the ICO’s assessment, CA/SCL had been over-egging the depth of its people profiling — with the regulator saying it did not find evidence to back up claims in its marketing material that it had ‘5,000+ data points per individual on 230 million adult Americans’.”

The data, while haphazardly managed, consisted of voter files, social media datasets, Experian data, and other readily purchasable sources. They used generic machine learning algorithms to cluster people into segments for political ad targeting. Nothing to see here.
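To underline how generic that workflow is, here’s a minimal sketch of clustering a voter-file-style dataset into targeting segments with scikit-learn’s stock k-means. The features are entirely made up for illustration; this is not CA’s actual data or pipeline.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical features one might derive from a voter file joined with
# purchased consumer data: age, turnout history, donations, interest score.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(18, 90, size=1_000),   # age
    rng.integers(0, 6, size=1_000),     # elections voted in (of last 6)
    rng.poisson(0.3, size=1_000),       # number of political donations
    rng.random(size=1_000),             # modeled "issue interest" score
])

# Off-the-shelf pipeline: standardize features, then k-means into segments.
segments = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X)
)
print(np.bincount(segments))  # rough size of each targeting segment
```

That’s the whole trick: commodity data plus a textbook clustering algorithm, which is exactly why the ICO found nothing its competitors couldn’t do.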


Expert view: Algorithms of Oppression with Safiya Noble is an interview with the author of Algorithms of Oppression: How Search Engines Reinforce Racism. The book is already on my reading list, and Dr. Noble summarizes some of its main points here: that search engines reinforce whatever the majority view of a topic is, and that many of the band-aid fixes platforms have applied do not address the underlying problem.

“For example, I wrote about what happened when a teenager in Baltimore, Kabir Alli, did a search on Black teenagers, and it surfaced all these criminal photos. Google fixed that. They tried to resolve it. And then about six weeks ago a news story broke that when you google four Black teenagers instead, you still get criminal images. So, the underlying logics do not get fixed and have not been fixed.”

She stresses the value of an interdisciplinary approach: “hiring people with the requisite background in gender studies and ethnic studies is very important to inform policy.” There’s also a need for investment in libraries, education, public media, and health, institutions that serve everyone, she says.