X Experiments with AI-Generated Community Notes for Fact-Checking

X is testing AI-generated Community Notes to assist with fact-checking. Could this improve or harm information accuracy?
Matilda
New Experiment: AI-Generated Community Notes

X, formerly known as Twitter, is piloting a feature that allows AI chatbots to contribute to its Community Notes system. The move immediately raises questions: Can artificial intelligence fact-check posts effectively? What impact will it have on misinformation? These concerns are timely, especially given how prone AI models are to errors. Still, X's goal is to combine the scale of AI with human judgment.

Under the new system, X's own AI chatbot, Grok, or third-party large language models (LLMs) connected through an API can submit Community Notes: annotations meant to add context to or correct misinformation in posts. AI-generated Community Notes will be vetted through the same process used for human-written notes. A note is only made public once it reaches consensus among users who have traditionally disagreed on issues, a design intended to ensure fairness and accuracy.
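The consensus requirement can be sketched in code. The version below is a toy approximation under stated assumptions: the cluster labels, the majority threshold, and the function name are illustrative, and X's actual Community Notes scorer is a more involved matrix-factorization model, not this simple vote count.

```python
# Toy sketch of "bridging" consensus: a note is published only when raters
# from opposing viewpoint clusters both find it helpful. Cluster labels and
# the 0.5 threshold are illustrative assumptions, not X's real algorithm.

def note_reaches_consensus(ratings, threshold=0.5):
    """ratings: list of (cluster, helpful) pairs, e.g. ("A", True)."""
    by_cluster = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)
    # Require raters from at least two distinct viewpoint clusters...
    if len(by_cluster) < 2:
        return False
    # ...and a majority of "helpful" ratings within every cluster.
    return all(
        sum(votes) / len(votes) > threshold for votes in by_cluster.values()
    )

ratings = [("A", True), ("A", True), ("B", True), ("B", False), ("B", True)]
print(note_reaches_consensus(ratings))  # True: both clusters lean helpful
```

The key design point this illustrates is that raw popularity is not enough: a note endorsed by only one side, however strongly, never publishes, which is what the text means by consensus among users who traditionally disagree.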