X is tightening its rules on how creators use artificial intelligence, warning that those who post AI-generated videos of armed conflict without clear disclosure will lose access to the platform’s Creator Revenue Sharing Program.
Nikita Bier, X’s head of product, said creators who use AI tools to mislead viewers about warfare or on-the-ground events will be suspended from revenue sharing for 90 days. The penalty targets posts that present synthetic footage as real without any indication that the material was created or altered by AI.
If a creator continues to share misleading AI war content after the initial suspension, X says they will be permanently removed from the revenue-sharing program. The company is not banning such posts outright, but it is cutting off the financial incentive for those who fail to label them.
Bier framed the move as a response to the growing risk of AI-driven disinformation during conflicts, noting that modern tools make it “trivial” to fabricate convincing battlefield scenes, explosions, or casualty footage. X argues that, in wartime, the stakes are especially high for users who rely on social platforms for real-time updates and eyewitness accounts.
To enforce the policy, X plans to combine automated systems that detect generative-AI artifacts with its crowdsourced fact-checking feature, Community Notes. Users who spot suspicious videos can flag them, and if a consensus emerges that the footage is AI-generated and undisclosed, the creator’s revenue-sharing status may be affected.
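X has not published how these signals would be combined, but the flow the policy describes (an automated detector and Community Notes consensus feeding an escalating penalty) can be sketched in code. The minimal illustration below is an assumption throughout: the `Creator` type, the function name, and the 0.9 detector threshold are all hypothetical, not anything X has disclosed.

```python
from dataclasses import dataclass


@dataclass
class Creator:
    """Hypothetical record of a creator's standing in the program."""
    handle: str
    prior_strikes: int = 0
    revenue_sharing: str = "active"  # "active" | "suspended" | "removed"


def enforce_undisclosed_ai_war_video(
    creator: Creator,
    detector_score: float,       # assumed 0..1 output of an AI-artifact classifier
    notes_consensus: bool,       # Community Notes consensus that footage is synthetic
    creator_disclosed_ai: bool,  # whether the creator labeled the post as AI-made
) -> str:
    """Return the creator's new revenue-sharing status under the described policy."""
    # Disclosure is a complete defense: labeled AI content is not penalized.
    if creator_disclosed_ai:
        return creator.revenue_sharing

    # Act only when both signals agree, to limit false positives.
    # (The 0.9 threshold is an assumption; the article names no number.)
    if detector_score < 0.9 or not notes_consensus:
        return creator.revenue_sharing

    # First offense: 90-day suspension; any repeat: permanent removal.
    creator.revenue_sharing = "suspended" if creator.prior_strikes == 0 else "removed"
    creator.prior_strikes += 1
    return creator.revenue_sharing
```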
The Creator Revenue Sharing Program allows eligible users to earn a portion of advertising revenue from their posts, turning viral content into a potential income stream. Supporters say it rewards original voices and encourages more frequent posting. Critics counter that it can push creators toward sensationalism, outrage bait, and borderline misinformation in pursuit of higher engagement.
Analysts note that the new rule is narrowly focused on AI depictions of armed conflict. Other forms of AI-generated media, such as political deepfakes and deceptive commercial endorsements, fall outside the scope of this enforcement, though they may still be covered by separate platform rules. That leaves X attempting a partial fix in a rapidly evolving information landscape, where synthetic media is becoming harder to spot and easier to monetize.