In 2023, the website then known as Twitter partially open sourced its algorithm for the first time. At the time, Tesla billionaire Elon Musk had only recently acquired the platform and claimed to be on a mission to restructure it to make it more transparent.
However, the algorithm’s code release was swiftly critiqued as “transparency theater,” with critics noting that it was “incomplete” and that it didn’t reveal much about the inner workings of the organization or why the code worked the way it did.
Now the site (rebranded as X) has open sourced its algorithm again, fulfilling a promise made by Musk last week. “We will make the new 𝕏 algorithm, including all code used to determine what organic and advertising posts are recommended to users, open source in 7 days,” he’d said. Musk also promised to provide transparency into the algorithm every four weeks for the foreseeable future.
In a post on GitHub on Tuesday, X provided an accessible write-up about its feed-generating code, along with a diagram of how the program works.
What has been revealed isn’t particularly earth-shattering, but it does provide a peek behind the algorithmic curtain. The diagram shows that, when sifting through content to feed a particular user, the site’s algorithm considers their engagement history (what posts they’ve clicked on, etc.) and surveys recent in-network posts. It also conducts a machine-learning-based analysis of “out-of-network” posts (content from accounts the user doesn’t necessarily follow) that it believes the user might also find appealing.
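In pseudocode terms, that candidate-sourcing step might look roughly like the sketch below. Every name here is hypothetical; X has not published this interface, and the real system's affinity model is far more sophisticated than a simple threshold.

```python
def source_candidates(following, recent_posts, predicted_affinity, threshold=0.5):
    """Gather candidate posts: everything recent from followed accounts,
    plus out-of-network posts a model predicts the user may enjoy.
    (Illustrative sketch only; names and threshold are made up.)"""
    in_network = [p for p in recent_posts if p["author"] in following]
    out_of_network = [
        p for p in recent_posts
        if p["author"] not in following and predicted_affinity(p) > threshold
    ]
    return in_network + out_of_network
```

The key idea the diagram conveys is simply that the feed mixes two pools, one drawn from who you follow and one scored by a model, before any ranking happens.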

The algorithm then filters out certain kinds of posts, including ones that come from blocked accounts or are associated with muted keywords, as well as content that has been deemed too violent or spam-like. It then ranks the remaining content based on what it thinks the user will find most appealing, weighing factors like relevance and content diversity so users don’t just get a bunch of posts that are all alike. The ranking also considers the likelihood that the user will like a post, reply to it, repost it, favorite it, or otherwise engage with it in some way.
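The filter-then-rank flow described above can be sketched as follows. The weights, field names, and predicates are all hypothetical (X has not published its scoring formula), and the diversity re-ranking step is omitted for brevity.

```python
def filter_posts(posts, blocked, muted_keywords, looks_harmful):
    """Drop posts from blocked accounts, posts matching muted keywords,
    and posts flagged as violent or spam-like."""
    kept = []
    for p in posts:
        if p["author"] in blocked:
            continue
        if any(k in p["text"].lower() for k in muted_keywords):
            continue
        if looks_harmful(p):
            continue
        kept.append(p)
    return kept

def rank_posts(posts, engagement_probs):
    """Order posts by a weighted sum of predicted engagement
    probabilities (like, reply, repost, ...)."""
    weights = {"like": 1.0, "reply": 2.0, "repost": 1.5}  # made-up weights
    def score(p):
        probs = engagement_probs(p)
        return sum(w * probs.get(k, 0.0) for k, w in weights.items())
    return sorted(posts, key=score, reverse=True)
```

The point of the weighted sum is that a post expected to provoke a reply can outrank one expected only to earn a passive like, which matches the write-up's emphasis on multiple kinds of engagement.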

This whole system is AI-based, according to X. The GitHub write-up released Tuesday notes that the system “relies entirely” on the company’s “Grok-based transformer” to “learn relevance from user engagement sequences.” In other words, Grok is looking at what you’re clicking and liking and feeding that information into the recommendation system. The write-up also notes that there is no “manual feature engineering for content relevance,” meaning humans don’t manually adjust how the algorithm determines what’s relevant. It adds that the automation “significantly reduces the complexity in our data pipelines and serving infrastructure.”
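To make "learning relevance from engagement sequences" concrete, here is a deliberately toy stand-in: it summarizes a user's recent engagements as a single vector and scores candidates by cosine similarity. The real system uses a Grok-based transformer over these sequences; nothing below reflects X's actual code, and the pseudo-embedding is pure illustration.

```python
import math

def embed(post_id, dim=8):
    # Deterministic pseudo-embedding for illustration only.
    return [math.sin(post_id * (i + 1)) for i in range(dim)]

def user_vector(engagement_sequence, dim=8):
    """Summarize a sequence of engaged-with post IDs as one vector.
    (A transformer would model the sequence far more richly.)"""
    vecs = [embed(pid, dim) for pid in engagement_sequence]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def relevance(candidate_id, engagement_sequence, dim=8):
    """Cosine similarity between the user summary and a candidate post."""
    u = user_vector(engagement_sequence, dim)
    c = embed(candidate_id, dim)
    dot = sum(a * b for a, b in zip(u, c))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in c))
    return dot / norm if norm else 0.0
```

The contrast with "manual feature engineering" is that no human picks out signals like post length or topic here; relevance falls out of the engagement history itself.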
Why is X revealing all of this now? It’s not totally clear. In the past, Musk has claimed that he wants to make the platform an exemplar of corporate transparency, a theme he continues to invoke today. In 2023, when the Twitter algorithm was first revealed, Musk said that providing “code transparency” would be “incredibly embarrassing at first” but would ultimately “lead to rapid improvement in recommendation quality.” He added: “Most importantly, we hope to earn your trust.” With its first code open-sourcing, the platform proclaimed a “new era of transparency” for Twitter.
Though Musk has talked up transparency, certain aspects of the platform have arguably grown less open since he took it over. When the tech billionaire bought Twitter in 2022, the site transitioned from a public company to a private one, a change not typically synonymous with openness. While the site once released multiple transparency reports a year, X didn’t publish its first such report until September 2024. In December, X was also fined $140 million by European Union regulators, who claimed that the site had violated “transparency obligations” under the Digital Services Act (DSA) and argued that its verification check mark system had made it harder for users to judge the authenticity of particular accounts.
X has also been under pressure over the past month due to the ways in which its chatbot, Grok, has been used to create and distribute sexualized content. The California Attorney General’s office and congressional lawmakers have both scrutinized the platform in recent weeks, citing claims that Grok has been used to create naked images of women and minors. As a result, some may view this appeal to openness as just more theater.