Over the past year, I've been engaged in a scattered, complex, deep investigation of political disinformation and the algorithms that help it thrive. What started as a machine learning project to analyze the language used in tweets turned into a research exercise that spanned the globe, examined political polarization, and touched on federal legislation from over one hundred years ago.
My initial goal for this research was to collect and analyze data and come away with some sense of the driving forces behind disinformation, in order to better understand how we can address it. I did a number of small-scale investigations; in one, I tracked the accounts with the top-performing posts on Facebook over the course of eight months. The results show that, at least on Facebook, far more post engagement happens on the far right than on the left. And we know that inflammatory far-right posts can lead to misinformation about elections and, ultimately, to insurrection.
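The core of that investigation was a simple tally: group top-performing posts by the political lean of their accounts and compare total engagement. Here is a minimal sketch of that kind of comparison; the account names, lean labels, and interaction counts below are invented for illustration, not my actual dataset.

```python
from collections import defaultdict

# Hypothetical rows of (account, political_lean, interactions) for
# top-performing posts. Real data would come from a monitoring tool;
# these values are made up purely to show the tallying step.
posts = [
    ("account_a", "far_right", 120_000),
    ("account_b", "far_right", 95_000),
    ("account_c", "left", 30_000),
    ("account_d", "center", 18_000),
    ("account_e", "left", 22_000),
]

# Sum engagement for each political lean.
engagement = defaultdict(int)
for account, lean, interactions in posts:
    engagement[lean] += interactions

# Print leans from most to least total engagement.
for lean, total in sorted(engagement.items(), key=lambda kv: -kv[1]):
    print(f"{lean}: {total:,}")
```

Even a toy tally like this makes the pattern in the real data easy to see: when one side's accounts dominate the top-post lists month after month, the aggregate engagement gap is stark.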
There are many different ways to tackle this issue, and many different entities working on it. One of the most direct paths is through regulation and policy. Following the insurrection, social media companies took action by deplatforming Donald Trump and other far-right accounts; as a result, misinformation plummeted. But that isn't enough: we need to be able to regulate the technology that helped these lies thrive.
To address this, I wrote a policy memorandum recommending a more holistic federal approach to the issue. I also created a website explaining how algorithms and ads help perpetuate misinformation and imagining what a way forward might look like. You can learn more at battleforyourbrain.com.