Google announced on Wednesday that it will require political advertisements on its platforms to disclose any content altered or created with artificial intelligence (AI) tools. The policy change takes effect in November, roughly a year before what is expected to be a contentious US presidential election. The move responds to growing concerns that generative AI could be exploited to mislead voters.
A Google spokesperson responded to an AFP query, stating, "For years, we've been committed to providing transparency for election ads. With the growing prevalence of tools producing synthetic content, we are taking our policies a step further by requiring advertisers to disclose any digitally altered or AI-generated material in their election ads."
In June, an AFP Fact Check team confirmed that a campaign video for Ron DeSantis attacking former US President Donald Trump featured images that appeared to have been created with AI. The video, shared on X (formerly Twitter), contained manipulated photos showing Trump embracing Anthony Fauci, a leading member of the US coronavirus task force, with kisses on the cheek. Google's ad policies already prohibit manipulating digital media to deceive or mislead people on political, social, or public interest matters.
Google also prohibits demonstrably false claims that could undermine trust or participation in the election process. The company requires political ads to disclose their funding sources and makes information about the advertisements available in an online ad library.
Under the forthcoming update, election-related ads will need to "clearly and prominently disclose" the presence of "synthetic content" depicting real or realistic-looking individuals or events. Google continues to invest in technology for detecting and removing such content.
Disclosures regarding digitally altered content in election ads must be "easily noticeable and conspicuous." Content warranting a disclosure label includes synthetic imagery or audio portraying individuals saying or doing things they did not do, or depicting events that did not occur. Google suggests labels such as "This image does not depict real events" or "This video content was synthetically generated."