
How a Fake AI Photo Cost the Market $100 Billion: The Dangers of Deepfakes

[Image: the fake AI-generated photo of an explosion near the Pentagon]

On Monday, May 22, 2023, a shocking image of an explosion near the Pentagon went viral on social media platforms, causing panic and confusion among millions of people.

The image showed a large plume of smoke rising from the vicinity of the U.S. military headquarters in Virginia, suggesting a possible terrorist attack or a major accident.

However, the image was soon revealed to be a hoax, created by artificial intelligence (AI) tools that can generate realistic-looking images from scratch. The U.S. Department of Defense and the Arlington Police Department confirmed that there were no reported incidents at or near the Pentagon and that the image and the accompanying reports were fake.

The fake photo was first posted on Facebook by a user who claimed to be near the scene. It quickly spread on Twitter, where it was shared by verified accounts with millions of followers, including the Russian state-controlled news network RT and the financial news site ZeroHedge.

The image also briefly spooked the stock market: the Standard & Poor's 500 dipped about 0.3% to a session low while the photo was circulating, a swing of roughly $100 billion in market value, before recovering once the image was debunked.

The incident highlights the dangers of deepfakes, which are AI-generated images, videos or audio that can manipulate reality and deceive people. Deepfakes can be used for various malicious purposes, such as spreading misinformation, impersonating celebrities or public figures, blackmailing or extorting people, or influencing elections or public opinion.

Deepfakes are becoming more sophisticated and accessible, thanks to advances in AI and machine learning. Anyone with a computer and an internet connection can create and share deepfakes online, without much technical skill or oversight.

This poses a serious threat to the trustworthiness of information and the credibility of sources on the internet.

How can we protect ourselves? Here are some ways to detect and counter deepfakes:

  • Checking the source and the context of the image or video. Is it from a reputable or verified account? Does it match with other sources or reports?

  • Looking for signs of manipulation or distortion in the image or video. Are there any inconsistencies or anomalies in the lighting, shadows, colors, edges, facial expressions or movements?

  • Using tools or platforms that can verify or debunk deepfakes. For example, some websites or apps can analyze images or videos and flag them as real or fake.

  • Educating ourselves and others about deepfakes and how they work. We can learn more about AI and how it can create and manipulate images or videos, and how we can spot them.

  • Reporting or flagging deepfakes if we encounter them online. We can help stop the spread of misinformation and raise awareness about the issue.
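One cheap first-pass check from the list above can be automated: real camera photos usually carry EXIF metadata, while images emitted by AI generators often do not. The sketch below scans a JPEG byte stream for an Exif APP1 segment using only the Python standard library. The function name and the synthetic byte strings are illustrative, and the result is only a weak signal, since metadata can be stripped by social platforms or forged outright.

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG byte stream contains an Exif APP1 segment.

    Missing EXIF is only a hint, not proof, that an image did not
    come from a camera: platforms often strip metadata on upload.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker: must be a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:               # lost sync with marker stream
            break
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):              # EOI or start-of-scan: stop
            break
        # Segment length is big-endian and includes these two length bytes.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True                         # found an Exif APP1 segment
        i += 2 + length                         # skip to the next marker
    return False

# Minimal synthetic streams for illustration (not real photos):
with_exif = b"\xff\xd8\xff\xe1\x00\x08Exif\x00\x00\xff\xd9"
without = b"\xff\xd8\xff\xdb\x00\x04\x00\x00\xff\xd9"
print(has_exif(with_exif))   # True
print(has_exif(without))     # False
```

In practice this kind of check belongs at the start of a verification workflow, as a quick triage before the slower steps, such as reverse image search and cross-checking reputable sources.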

Deepfakes are a serious challenge for our society and our democracy. They can undermine our trust in information and our ability to discern fact from fiction. We need to be vigilant and critical when we consume online content, and we need to support efforts to combat deepfakes and promote digital literacy.

Learn More:

Fake AI-generated image of explosion near Pentagon spreads on social media | Artificial intelligence (AI) | The Guardian

Fake Pentagon “explosion” photo sows confusion on Twitter | Ars Technica

Fake Pentagon explosion photo goes viral: How to spot an AI image | Science and Technology News | Al Jazeera
