AI Media Adding to Public Trust Issues

Media generated by artificial intelligence, such as pictures and videos, are adding to public trust issues. The increasing quality of such media is made possible by advances in machine learning. Models such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) are often used to produce the unique, realistic AI-generated pictures we see on the Internet. As the technology advances further, discerning between authentic and generated imagery will likely become nearly impossible without some kind of accountability identifier.
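As a rough illustration of the adversarial idea behind GANs, the sketch below pits a small generator against a small discriminator using PyTorch. The tiny fully connected networks, toy dimensions, and random stand-in data are illustrative assumptions, not a production image model.

```python
# Minimal sketch of GAN training (illustrative only): a generator learns to
# produce samples that a discriminator cannot tell apart from "real" ones.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # toy sizes, assumed for illustration

# Generator: maps random noise to a fake sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.rand(32, data_dim) * 2 - 1   # stand-in for a batch of real images
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator update: push real samples toward 1 and generated samples toward 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the updated discriminator score fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The back-and-forth between the two networks is what drives the realism: each time the discriminator gets better at spotting fakes, the generator is pushed to produce images that are harder to distinguish from authentic ones.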

Efforts by social media platforms and news media companies are partially helping in the near term. Facebook and X place labels on posts containing images they determine were artificially generated rather than captured on location. Recent examples include wartime scenes of missiles flying through the air that falsely depict attacks between Israel and Hamas. In one such case, fact checkers did not catch the fabrication until millions of people had already viewed the post. Both Facebook and X then labeled the post, identifying the video as a video game animation rather than actual war footage.

According to a July 2023 study, “experts agree that social media platforms have worsened the problem of misinformation and were in favor of various actions that platforms could take against misinformation, such as platform design changes, algorithmic changes, content moderation, de-platforming prominent actors that spread misinformation, and crowdsourcing misinformation detection or removing it.”

Stock image outlets, such as Adobe, offer AI-generated graphics for sale. Searching for keywords including Israel, Hamas, and war returns a plethora of images depicting damaged buildings, explosions, weapons, and victims in the aftermath. While some of the image listings do mention AI, the realistic images can easily be shared as misinformation, which actively undermines the trust we place in the news we read about world events each day. The featured image above is one example of an AI-generated image showing a missile explosion in a war.

“People trust what they see,” said Wael Abd-Almageed, a professor at the University of Southern California. He goes on to say that “once the line between truth and fake is eroded, everything will become fake. People will not be able to believe anything.” The problem has the potential to become worse as AI-generated content becomes more realistic and more widespread unless actions are taken to counter the erosion of trust.

The study mentions several system-level actions against misinformation. The most widely agreed-upon solutions were platform design changes, algorithmic changes, and content moderation on social media. Experts were also in favor of de-platforming prominent actors who spread misinformation, stronger regulations to hold platforms accountable, crowdsourcing the detection of misinformation, removing misinformation, and penalizing the sharing of misinformation on social media. Shadow banning, or stealthily blocking a user from areas of a platform, was by far the least popular action against misinformation.

The call for more accountable information online is stronger than ever and growing as the technology advances. Perhaps we can also turn to AI to identify such inauthentic media. Blockchain technology that registers authentic graphics might serve as a public database to record and verify authenticity. Organizations that are publicly forthright with their information and verified through certifications would also help the public get a better sense of whom to trust in the jungle of artificial information online.
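As a minimal sketch of that registry idea, the snippet below hashes an image at publication time, records the fingerprint in an append-only ledger, and later checks whether a file matches a registered original. The in-memory ledger, function names, and file paths are illustrative assumptions standing in for a real blockchain or provenance service.

```python
# Toy content-authenticity registry: publishers register a cryptographic
# fingerprint of an authentic image; anyone can later verify a copy against it.
import hashlib
from pathlib import Path

ledger: dict[str, dict] = {}  # fingerprint -> registration record (stand-in for a public ledger)

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def register(path: str, publisher: str) -> str:
    """Record an authentic image's fingerprint, as a news outlet might at publication."""
    digest = fingerprint(path)
    ledger[digest] = {"publisher": publisher, "path": path}
    return digest

def verify(path: str) -> bool:
    """Check whether a file's bytes match a registered original."""
    return fingerprint(path) in ledger

# Hypothetical usage:
# register("press_photo.jpg", publisher="Example News Agency")
# verify("press_photo.jpg")        # True: matches the registered original
# verify("edited_or_ai_copy.jpg")  # False: no registered fingerprint
```

An exact-hash registry like this only confirms byte-for-byte matches; provenance efforts such as C2PA content credentials instead embed signed metadata in the file itself so that edits and derivations can also be traced.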