How Verified Accounts Helped Fake Pentagon Explosion Footage Go Viral

Verified accounts on Twitter may have contributed to the viral spread of a false claim that an explosion was taking place at the Pentagon.
At around 8:42 a.m. Monday, a verified Twitter account posing as a media and news agency shared a fake image of smoke billowing near a white building that it claimed was the Pentagon. The tweet's caption also falsely placed the scene at the Pentagon.
No such incident took place, the Arlington County Fire Department later said on Twitter. The Pentagon, headquarters of the United States Department of Defense, is located in Arlington County, Virginia.
A Pentagon spokesperson also told ABC News that no explosions occurred.
But throughout the morning, the fake image and misleading caption gained momentum on Twitter. Cyabra, a social analytics firm, analyzed the online conversation and found that around 3,785 accounts had shared or mentioned the false claim, dozens of which were verified.
“The check mark may well have helped give the account an appearance of authenticity, which would have helped it achieve virality,” Jules Gross, solutions engineer at Cyabra, told ABC News.
Despite the number of verified accounts involved, they did not appear to be acting in a coordinated way, according to Cyabra.
A false image spread on social networks on Monday morning.
ABC News
“The bad news is that it looks like just one account was able to achieve virality and cause maximum chaos,” Gross added.
While ABC News was unable to determine the source of the content, or confirm that the 8:42 a.m. tweet was the original, the image bears many hallmarks of having been generated with a text-to-image AI tool.
There are numerous visual inconsistencies in the image, including a street lamp that appears to be both in front of and behind a metal barrier, and the building itself does not resemble the Pentagon.
AI-powered text-to-image tools allow users to enter a natural language description, called a prompt, to get an image back.
Over the past few months, these tools have become increasingly sophisticated and accessible, resulting in an explosion of hyper-realistic content misleading users online.
The original fake tweet was eventually taken down, but not before it was amplified by a number of accounts on Twitter bearing the blue check that was once reserved for verified accounts but can now be purchased by any user.
ABC News could not immediately reach a Twitter spokesperson for comment.
What are the solutions?
“Today’s AI hoax from the Pentagon is a harbinger of what’s to come,” said Jeff McGregor, CEO of Truepic, whose technology aims to add a layer of transparency to content published online.
Truepic, a founding member of the Coalition for Content Provenance and Authenticity, has developed camera technology that captures, signs and seals critical details in every photo and video, such as time, date and location.

Last month, Truepic, Revel.ai and Nina Schick published the world’s first transparent deepfake signed by the C2PA open standard.
Truepic/Revel.ai
The company has also created tools that allow users to hover over AI-generated content to find out how it was made. In April, they released the first “transparent deepfake” to show how the technology works.
While some companies have embraced C2PA technology, it is now up to social media platforms to make this information available to their users.
“This is an open-source technology that allows anyone to attach metadata to their images to show that they created an image, when and where it was created, and what changes were made to it along the way,” Dana Rao, general counsel and chief trust officer at Adobe, told ABC News. “It allows people to prove what is real.”
Changes would be identified: if an image was cropped or filtered, for example, that information could be displayed. The user would also be able to choose how much of that data is made available to the public.
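The provenance scheme Rao describes — binding a creator, a timestamp, and an edit history to an image so that a viewer can later verify nothing was altered — can be sketched in Python. This is an illustrative stand-in, not the real C2PA format or Truepic's API: actual C2PA manifests are signed with X.509 certificates, whereas this sketch uses a symmetric HMAC key, and all function and field names here are hypothetical.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical stand-in for a signing certificate; real C2PA uses
# X.509 certificate chains, not a shared symmetric key.
SIGNING_KEY = b"demo-key-not-a-real-certificate"

def make_manifest(image_bytes, creator, edits):
    """Build a provenance manifest binding metadata to the image's hash."""
    manifest = {
        "creator": creator,
        "created": datetime.now(timezone.utc).isoformat(),
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "edits": edits,  # e.g. ["cropped", "filtered"]
    }
    # Serialize deterministically so the signature is reproducible.
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest, signature

def verify_manifest(image_bytes, manifest, signature):
    """Check the signature, then check the image still matches its hash."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # metadata was tampered with
    return manifest["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
```

In this sketch, editing the image or the metadata after signing makes verification fail, which is the property that lets a platform flag content whose stated provenance no longer matches what is being displayed.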
State and local law enforcement received a written briefing Monday from the Institute for Strategic Dialogue, an organization dedicated to countering extremism, hate and disinformation, with details of the incident.
“Security and law enforcement officials are increasingly concerned about AI-generated information operations intended to undermine government credibility, stoke fear or even incite violence,” said John Cohen, ABC News contributor and former acting undersecretary for intelligence.
“Digital content provenance will help mitigate these events by increasing the transparency and authenticity of visual content and by empowering users and creators,” McGregor added.