2 billion people use YouTube monthly. YouTube’s recommendation algorithm determines what people watch for more than 70% of the views. That's 700 million hours - or 1,000 human lifetimes - every single day.—algotransparency.org
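That arithmetic holds up to a quick sanity check. Here is a minimal sketch, assuming an average human lifetime of roughly 80 years (my assumption, not the site's):

```python
# Sanity check on the quoted figure: 700 million watch hours per day
# expressed as human lifetimes. The ~80-year lifetime is an assumption.
daily_watch_hours = 700_000_000
hours_per_lifetime = 80 * 365 * 24          # roughly 700,800 hours
lifetimes_per_day = daily_watch_hours / hours_per_lifetime
print(f"{lifetimes_per_day:,.0f} human lifetimes per day")   # ~1,000
```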
I have been lecturing about racial bias in algorithms fairly consistently lately. Not necessarily in a didactic sense, but through the data questions I am asking and the insights that deep interrogation and geospatial location data reveal. Panel discussions often expect an all-in embrace of the wonders of AI. I am not completely sold.
Distortion can fool anybody. But once you realize you are the mark, what are you going to do about it? I recommend AlgoTransparency. What does AI want you to see when you “innocently” visit social media platforms? Well, now you can find out.
All of the major platforms are covered, but I am focusing on YouTube since I will be interviewed shortly about a tangential topic.
The top recommended video on September 1st garnered 2,737,573 views and was recommended at a rate of 157 recommendations per 10,000 viewers. These recommendations came from at least 219 different channels. The recommendation engine is the mirror.
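For readers who want to see how a number like that might be derived, here is a minimal sketch, assuming the rate is simply an observed recommendation count scaled by views. The function name and the scaling are mine for illustration; AlgoTransparency's actual methodology may differ.

```python
# Hypothetical sketch of a "recommendations per 10k viewers" style rate.
# The function and scaling are assumptions for illustration only.

def recommendations_per_10k(recommendation_count: int, view_count: int) -> float:
    """Scale an observed recommendation count to a per-10,000-viewers rate."""
    return recommendation_count / view_count * 10_000

# Working backwards from the September 1st figures quoted above:
views = 2_737_573
reported_rate = 157            # recommendations per 10k viewers
implied_recommendations = reported_rate / 10_000 * views
print(f"~{implied_recommendations:,.0f} recommendations implied by the reported rate")

# Round-trip check: plugging the implied count back in recovers ~157.
print(round(recommendations_per_10k(int(implied_recommendations), views)))
```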
Because I love maps, here is the YouTube map of topics.
AlgoTransparency.org is the mission of AI expert Guillaume Chaslot, who helped build the YouTube recommendation engine itself.
Understanding how this is all possible requires a look at the legislative text of Section 230 of the Communications Decency Act of 1996. Here is the relevant portion of the current version:
Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
Basically, the platforms aren't responsible, even though the text above comes from an era that predates modern AI and the existence of social media platforms. The legislation predates YouTube's launch in 2005 by nearly a decade.
These platforms are NOT mirroring back to us what we are asking for. They are amplifying noise, competing for our attention with fake bots and weird avatars, and predicting what will grab us next. Not only in the next 10 minutes but after that, and after that, to infinity.
I work in tech, so I am not bashing technology. I am reminding us of the words of Edward O. Wilson: “Paleolithic emotions, medieval institutions, and godlike technology.” I wrote a piece with that title over on data & donuts.
The problem with these platforms is the business model and lack of responsibility.
So just to say, and to make sure that we're all together here in the tech industry: this is not an anti-tech conversation. It's about what this automated, attention-hungry, AI-powered system is doing to history, to world culture.
The problem occurs when they have a self-dealing, extractive business model that says: instead of wanting just to help you with your goal, we really just want to suck you down the rabbit hole. And there's no reason why recommendations should be on by default. This is not saying you shouldn't be able to post ukulele videos or health how-to videos.
This is about why we are recommending things to people that systematically tilt in the more extremizing directions that we know are ruining society.—Tristan Harris
These are conversations we need to be having. Especially if we are working with data and helping to shape the future of an industry that cultivates insights and stories.
Be careful whose stories you are telling.