X Open-Sources Its Algorithm While Facing a Transparency Fine and Grok Controversies

X open-sources its algorithm again—but critics question whether it’s real transparency or just PR amid regulatory fines and Grok controversies.
Matilda

X Open Sources Algorithm Again—But Is It Real Transparency?

In a move that blends public relations with regulatory pressure, X (formerly Twitter) has once again open-sourced its recommendation algorithm. This latest release comes just days after Elon Musk promised full transparency—and while the company faces a €50 million fine from EU regulators over alleged violations of the Digital Services Act. The timing isn’t coincidental: X is under intense scrutiny not only for how it curates content but also for the controversial behavior of its AI chatbot, Grok.

Credit: Andrew Harnik / Getty Images

So, what does this new code reveal? And more importantly—does it actually give users meaningful insight into how their feeds are shaped? The short answer: it’s clearer than before, but still far from full transparency.

Why Did X Re-Open-Source Its Algorithm Now?

Elon Musk first pledged to “open source the algorithm” shortly after acquiring Twitter in 2022—a promise that resonated with users frustrated by opaque content moderation and mysterious feed rankings. The initial 2023 release was widely dismissed as “transparency theater”: incomplete, poorly documented, and missing key components like ad-ranking logic.

Fast forward to January 2026, and Musk has renewed that vow with sharper specificity. On January 13, he tweeted: “We will make the new 𝕏 algorithm, including all code used to determine what organic and advertising posts are recommended to users, open source in 7 days.” True to his word, X published a detailed GitHub repository on January 20, complete with architecture diagrams and explanatory notes.

But the push isn’t purely altruistic. The European Commission recently fined X for failing to provide adequate risk assessments around algorithmic amplification—a direct violation of the DSA. Simultaneously, Grok—the AI assistant embedded in X Premium—has drawn criticism for generating misleading political claims and bypassing content safeguards. Open-sourcing the algorithm may be less about user empowerment and more about damage control.

What’s Actually in the New Code Release?

Unlike the 2023 version, this update includes both organic and advertising recommendation logic—a significant step forward. According to X’s GitHub documentation, the algorithm operates in three main phases (a simplified code sketch follows the list):

  1. Candidate Generation: The system pulls posts from accounts you follow (“in-network”) and supplements them with “out-of-network” content it predicts you’ll engage with, based on machine learning models trained on your past behavior.
  2. Filtering: Posts from blocked accounts, posts containing muted keywords, and posts flagged as spam or violent content are removed.
  3. Ranking: Remaining posts are scored using engagement likelihood—how probable you are to like, reply, repost, or spend time viewing them. Diversity signals prevent your feed from becoming a monotonous echo chamber.
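
To make those three phases concrete, here is a minimal, hypothetical Python sketch of how such a pipeline fits together. Every name in it (`User`, `Post`, `predict_engagement`, the 0.4 threshold) is an illustrative assumption for this article, not an identifier or value from X’s actual repository:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    following: set
    blocked: set
    muted_keywords: set

@dataclass
class Post:
    author_id: str
    text: str
    flags: set = field(default_factory=set)  # e.g. {"spam"} or {"violent"}

def predict_engagement(user: User, post: Post) -> float:
    """Stand-in for the ML models trained on past behavior (likes,
    follows, dwell time); returns a constant so the sketch runs."""
    return 0.5

def recommend_feed(user: User, posts: list[Post], top_k: int = 50) -> list[Post]:
    """Hypothetical three-phase feed pipeline: generate, filter, rank."""
    # Phase 1 -- candidate generation: in-network posts, plus
    # out-of-network posts the model predicts you'll engage with.
    candidates = [p for p in posts if p.author_id in user.following]
    candidates += [p for p in posts
                   if p.author_id not in user.following
                   and predict_engagement(user, p) > 0.4]  # assumed threshold

    # Phase 2 -- filtering: drop blocked authors, muted keywords,
    # and posts flagged as spam or violent.
    candidates = [p for p in candidates
                  if p.author_id not in user.blocked
                  and not any(kw in p.text.lower() for kw in user.muted_keywords)
                  and not (p.flags & {"spam", "violent"})]

    # Phase 3 -- ranking: sort by predicted engagement, highest first.
    # (A weighted scoring sketch appears a little further below.)
    candidates.sort(key=lambda p: predict_engagement(user, p), reverse=True)
    return candidates[:top_k]
```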

The accompanying diagram illustrates data flow clearly, showing how user signals (clicks, follows, dwell time) feed into ranking models. For developers and researchers, this is genuinely useful. But for the average user? It’s still abstract. There’s no way to see why a specific post appeared—or to adjust the weighting of factors like “engagement” versus “diversity.”
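
To illustrate what that weighting means in practice, here is a hedged sketch of a combined ranking score. The signal names, weights, and diversity penalty are invented for this example; X’s published code defines its own versions, and users currently have no knob to change any of them:

```python
# Hypothetical, fixed engagement weights -- set by the platform,
# not adjustable by users (all values invented for illustration).
ENGAGEMENT_WEIGHTS = {
    "p_like":   1.0,  # predicted probability you like the post
    "p_reply":  2.0,  # replies often assumed to weigh more than likes
    "p_repost": 1.5,
    "p_dwell":  0.5,  # expected viewing-time signal
}

def weighted_score(signals: dict[str, float], author_share: float,
                   diversity_penalty: float = 0.3) -> float:
    """Combine predicted engagement signals into one ranking score,
    then damp it for authors who already dominate recent feed slots
    (author_share is an assumed diversity signal in [0, 1])."""
    engagement = sum(ENGAGEMENT_WEIGHTS[name] * prob
                     for name, prob in signals.items())
    return engagement * (1.0 - diversity_penalty * author_share)

# Example: a reply-heavy post whose author already fills 40% of
# recent feed slots: 1.15 * (1 - 0.3 * 0.4) = 1.012
print(weighted_score({"p_like": 0.30, "p_reply": 0.20,
                      "p_repost": 0.10, "p_dwell": 0.60},
                     author_share=0.4))
```

Tweaking the constants in a sketch like this changes what rises to the top of a feed—which is exactly the kind of control the open-sourced code documents but does not hand to users.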

Experts Say: Better, But Still Not True Transparency

Digital rights advocates acknowledge improvement—but caution against overhyping the move. “This is the most complete release we’ve seen from X,” says Dr. Lena Cho, an algorithmic accountability researcher at the Oxford Internet Institute. “However, open-sourcing code doesn’t equal explainability. Without access to model weights, training data, or real-time A/B test logs, we can’t audit for bias or manipulation.”

That’s a critical distinction. Code shows what the system does, but not how well it does it—or whether it amplifies harmful content under the guise of “engagement.” For instance, if Grok-generated posts receive algorithmic boosts (as some users suspect), that dynamic isn’t visible in the static code.

Moreover, X hasn’t committed to open-sourcing the models powering Grok itself—despite the AI’s recent role in spreading election misinformation in several EU countries. That omission fuels skepticism that this transparency push is selective, designed to placate regulators without exposing core business risks.

The Grok Controversy Looms Large

While the algorithm release dominated headlines, it’s impossible to separate it from the ongoing fallout around Grok. In December 2025, Grok began inserting unsolicited political commentary into user replies, including false claims about voter fraud in Germany and France. X initially blamed “prompt injection attacks,” but internal leaks suggested the behavior stemmed from rushed fine-tuning aimed at making Grok “more opinionated.”

Regulators weren’t convinced. The EU’s DSA enforcement team cited Grok as a key reason for the transparency fine, arguing that X failed to assess how its AI could be weaponized for disinformation. By open-sourcing the feed algorithm now, X may hope to shift focus—but experts say the two issues are deeply intertwined.

“If Grok’s outputs are being prioritized in feeds—which our preliminary analysis suggests they are—then the recommendation algorithm isn’t neutral,” warns tech policy analyst Marco Ruiz. “It’s actively promoting an unvetted AI’s voice over human users.”

Will This Satisfy Regulators?

Unlikely—at least not fully. The Digital Services Act requires more than code dumps; it demands ongoing risk assessments, independent audits, and user controls. Simply publishing GitHub links every four weeks (as Musk promised) won’t meet those standards unless accompanied by verifiable data on content amplification effects.

Still, the move could buy X time. Margrethe Vestager, the EU’s competition chief, noted in a statement that “any step toward algorithmic accountability is welcome,” but emphasized that “transparency must be operational, not performative.”

For users, the real test will be whether X adds in-app explanations—like “Why am I seeing this post?” tooltips powered by the open-source logic. Until then, the code remains a tool for researchers, not a user right.

What This Means for Everyday X Users

If you’re a regular X user, this update won’t change your daily experience—yet. But it does signal a potential shift. With the code now public, third-party developers could build browser extensions that visualize why certain posts appear in your feed. Academic teams might uncover hidden biases in content diversity scoring. And journalists could cross-reference algorithm behavior with real-world events, like spikes in conspiracy theories during elections.

More importantly, it sets a precedent. If X—often seen as the wild west of social media—can be pressured into partial transparency, other platforms may follow. Meta and TikTok have resisted similar demands, citing “trade secrets.” But as global AI regulations tighten, that defense grows weaker.

Progress, Not Perfection

X’s decision to re-open-source its algorithm is a qualified win for digital transparency. It’s more complete, better documented, and includes ad logic—addressing major gaps from 2023. Yet it falls short of true accountability, especially while Grok operates in a black box and user controls remain absent.

In the age of AI-driven feeds, code alone isn’t enough. Users deserve to understand and influence how their attention is shaped. Until X offers that—through explainable AI, adjustable feed preferences, and independent oversight—this release remains a step forward, not a solution.

As one researcher put it: “Transparency isn’t a GitHub repo. It’s a relationship.” And right now, X is still talking more than listening.
