How AI-powered software spreads Russian disinformation on X

The US Justice Department (DoJ) has seized two US-based domains used by Russian threat actors to create fake profiles on X (formerly Twitter) that would spread disinformation in the United States and abroad.

This bot farm was created and operated via Meliorator, an AI-enhanced software package. Most of the accounts were made to look as if they belonged to individuals in the United States, and pushed out messages (with images and videos) in support of Russian government objectives.

Leveraging the bot farm

“Development of the social media bot farm was organized by an individual identified in Russia (Individual A). In early 2022, Individual A worked as the deputy editor-in-chief at RT, a state-run Russian news organization based in Moscow,” the DoJ claims.

“Since at least 2022, RT leadership sought the development of alternative means for distributing information beyond RT’s standard television news broadcasts. In response, Individual A led the development of software that was able to create and to operate a social media bot farm.”

How Meliorator is used (Source: IC3)

US, Dutch and Canadian law enforcement and cybersecurity agencies have published a joint advisory detailing Meliorator's capability to create fake but authentic-appearing social media personas en masse and use them to create, spread, and repeat disinformation.

“The identified bot personas associated with the Meliorator tool are capable of deploying content similar to typical social media users, such as generating original posts, following other users, ‘liking,’ commenting, reposting, and obtaining followers; mirroring disinformation of other bot personas through their messaging, replies, reposts, and biographies; perpetuating the use of pre-existing false narratives to amplify Russian disinformation; and formulating messaging, to include the topic and framing, based on the specific archetype of the bot,” they explained.

“The creators of the Meliorator tool considered a number of barriers to detection and attempted to mitigate those barriers by coding within the tool the ability to obfuscate their IP, bypass dual factor authentication, and change the user agent string. Operators avoid detection by using a backend code designed to auto-assign a proxy IP address to the AI generated persona based on their assumed location.”
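The advisory does not include Meliorator's code, but the proxy-assignment behavior it describes can be pictured with a minimal, purely illustrative sketch: a hypothetical helper that picks a proxy and browser user-agent string matching a persona's claimed location and applies both to an HTTP session. The proxy pool, addresses, and function names below are assumptions for illustration, not details taken from the tool itself.

```python
import random
import requests

# Illustrative only: a hypothetical per-location proxy pool and user-agent list.
# Neither the addresses nor the structure are taken from Meliorator.
PROXY_POOLS = {
    "US": ["http://203.0.113.10:8080", "http://203.0.113.11:8080"],
    "DE": ["http://198.51.100.20:8080"],
}
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
]

def session_for_persona(assumed_location: str) -> requests.Session:
    """Build an HTTP session whose proxy matches the persona's claimed location."""
    session = requests.Session()
    proxy = random.choice(PROXY_POOLS[assumed_location])
    session.proxies = {"http": proxy, "https": proxy}
    session.headers["User-Agent"] = random.choice(USER_AGENTS)
    return session

# Traffic for a "US" persona would be routed through a US-based proxy.
us_session = session_for_persona("US")
print(us_session.proxies, us_session.headers["User-Agent"])
```

In this reading, the "auto-assign a proxy IP address ... based on their assumed location" behavior is simply ordinary per-session HTTP client configuration applied at scale, which is what makes the personas' traffic appear to originate from the locations they claim.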

Several other individuals helped develop the tool and also procured infrastructure for the bot farm.

“In early 2023, with the approval and financial support of the Presidential Administration of Russia (aka the Kremlin), a Russian FSB officer (FSB Officer 1) created and led a private intelligence organization (P.I.O.), as explained in the affidavits. The P.I.O.’s membership was comprised of, among others, employees at RT, including Individual A. The true purpose of the P.I.O. was to advance the mission of the FSB and the Russian government, including by spreading disinformation through the social media accounts created by the bot farm. According to the affidavits, FSB Officer 1, Individual A, and other members of the PIO had access to the social media bot farm,” the DoJ says.

Disrupting the Russian disinformation operation

The domains seized by the FBI (“mlrtr.com” and “otanmail.com”) had been used to set up private email servers, which in turn supplied the email accounts used to register fictitious social media accounts via Meliorator.

The DoJ has searched nearly 1,000 suspected X bot accounts, which X subsequently suspended for violating its terms of service.

Meliorator currently creates fake personas only on X, but “additional analysis suggests the software’s functionality would likely be expanded to other social media networks.”

In related news, Wired reporter David Gilbert explained today how fake news created by a one-man, AI-powered Russian spam farm and delivered via bogus news websites topped Google Search results and was amplified on X by a network of bot accounts.
