OF#34 – Your Brain on Exercise, The Facebook Problem, and Waste as Art
What does exercise do to your brain? Is Facebook working to reduce division and extremism? And what does waste transformed into art look like?
Hi there, I’m Thomas Najar. Welcome to issue 34 of Open Frame.
The last time I checked, there was a big ship blocking the Suez Canal. Is it still there? Who knows.
Let’s dive into the newsletter.
Your Brain on Exercise
Over the years, I’ve tried many forms of exercise, including yoga, running, swimming, and cycling. Lately, I’ve settled on High-Intensity Interval Training (HIIT). I love it because you get a great workout quickly, without needing any equipment.
I work out early in the morning, and I’m much sharper, more alert, relaxed, and focused on workout days than on rest days. Why is this? Because exercise is the best thing you can do for your brain.
The Problem with Facebook
I rarely use Facebook or other social media, and watching The Social Dilemma on Netflix only strengthened my aversion. There’s reason to believe that Facebook spreads misinformation, amplifies extreme points of view, and sows division.
This isn’t news. Facebook has been under the gun to take action for some time. Are they taking responsibility and making changes? According to Karen Hao in an article for MIT Technology Review, it appears not.
By the time thousands of rioters stormed the US Capitol in January, organized in part on Facebook and fueled by the lies about a stolen election that had fanned out across the platform, it was clear from my conversations that the Responsible AI team had failed to make headway against misinformation and hate speech because it had never made those problems its main focus. More important, I realized, if it tried to, it would be set up for failure.
The reason is simple. Everything the company does and chooses not to do flows from a single motivation: Zuckerberg’s relentless desire for growth. Quiñonero’s AI expertise supercharged that growth. His team got pigeonholed into targeting AI bias, as I learned in my reporting, because preventing such bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook leadership has also repeatedly weakened or halted many initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.
In other words, the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization. And it’s all of us who pay the price.
It seems Facebook is unwilling to sacrifice growth and engagement for the good of society and democracy. In fact, the company has suggested that tamping down on misinformation is unfair because it flags conservative posts more often than liberal ones.
When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.
But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
This happened countless other times—and not just for content moderation. In 2020, the Washington Post reported that Kaplan’s team had undermined efforts to mitigate election interference and polarization within Facebook, saying they could contribute to anti-conservative bias. In 2018, it used the same argument to shelve a project to edit Facebook’s recommendation models even though researchers believed it would reduce divisiveness on the platform, according to the Wall Street Journal. His claims about political bias also weakened a proposal to edit the ranking models for the news feed that Facebook’s data scientists believed would strengthen the platform against the manipulation tactics Russia had used during the 2016 US election.
It would be nice if Facebook stopped spreading divisive and hateful content. But if doing so is antithetical to their business model, why should we expect them to?
I remain disengaged partly as a form of protest, but I’m not so virtuous. I mainly just think Facebook is boring.
Blending Waste into Art
Mariah Reading is an artist and park ranger who takes issue with waste. Over the past several years, she has transformed found objects into works of art that blend into our national parks.
(via Kottke)
The Mastery of Bach
The Prelude to Bach’s Cello Suite No. 1 in G Major is one of the most famous and recognizable pieces of classical music ever written. A true masterpiece, it has inspired and humbled cellists for generations. What makes it so special? Vox explores.
Tweet of the Week
Sometimes people are too nice to give honest feedback. Ask them whether something you produced is any good, and they may sugarcoat what they really think.
What to do? Adam Grant offers a solution.
That’s it for this week, folks. Have a great week, stay safe, and remember to be awesome!
Thomas