There is growing worry about an increase in deceptive generative AI-based political ads sowing more confusion as the political season heats up. That, in turn, will make lies easier to sell.
[Photo caption: The rising temperature of the contest between former President Donald Trump and Florida Gov. Ron DeSantis for the 2024 GOP nomination makes this real November 2019 photo look out of place today. Could it be a generative AI image? No. It is a real picture.]
The FEC recently voted to open public comment on how to regulate generative AI political ads. News organizations, meanwhile, need to learn how to report on these ads. This is their opportunity.
A recent example is a DeSantis PAC’s attack ad on frontrunner Donald Trump. The ad shows the former president criticizing Iowa’s Republican Gov. Kim Reynolds, but Trump’s voice was AI-generated; he never gave a speech saying those words. The PAC supporting DeSantis took text from Trump’s social media posts and generated audio in his voice. In July, WESH 2 News, a TV outlet in Central Florida, aired an effective debunk.
This is a great start, and news outlets can do more. The key question is: How does generative AI change the already vexing issue of political ads, deception, and lies? Is this just more of the same, produced differently and with more sophistication, or does it raise fundamentally new questions, along with new responsibilities for news organizations to think about?
The power distinctive to generative AI in political ads is this: it destroys the authenticity we currently associate with the whole person. Any element of a person’s likeness, especially a public figure’s, can now be generated separately by AI – voice, image, or video, independent of generated text or language, and in any combination.
Journalists and editors now need to be thinking about the authenticity of likeness in the era of generative AI.
Generative AI can disassemble your likeness as a whole person – voice, visual presence, gestures, eye movements, smile, and more. Given that, the different authentic elements of any public figure can be synthesized and recombined arbitrarily.
Until recently, the focus has been on the content of the claims in an ad. Standardization efforts are also underway to help encode the origin and authenticity of a branded video and prevent tampering. But the new issue generative AI brings is that the authenticity of your likeness can itself be disassembled and reassembled for the express purpose of propaganda and deception in a political ad.
What should newsrooms do?
I convene the Markkula Center’s Journalism and Media Ethics Council, and several members of the council addressed this topic during one of our meetings. My takeaway: ask the key content-related questions – i.e., vet the claims – but also dig up and expose the digital mechanics of the ad.
Questions reporters could ask as they consider reporting out the ad:
- Who released the ad? Does the ad promote a candidate the sponsor supports, or does it attack an opponent? If the ad comes from the candidate’s own side and contains falsehoods or lies about that candidate, the FEC can look into it. If the ad targets an opponent and spreads lies or falsehoods about them, the FEC has no leeway.
- Is it satire? For instance, The Daily Show did a satirical generative AI video on Biden, which is easy to recognize because of the show’s branding on the visuals. The Lincoln Project added a laugh track to a video former President Trump posted about the indictments and the Jan. 6 committee.
- If it’s not satire, the demands of the review go up. Identify where generative AI was used: video, audio, text, headline, or a mix of these? Did the producer disclose that?
- Dig around and ascertain whether the public figure in the ad actually said those things in a speech somewhere. Details matter.
- Even if they said it, that does not automatically make the claim itself true or false. A review of the claim or claims – also called fact-checking – will be needed. If the claim has already been fact-checked, those results (true, false, partially true, etc.) may already be available from major fact-checking organizations or peer news sites.
- Is the ad simply using generative AI to create the likeness of a person to relay an otherwise true or factual message? The newsroom needs to know whether the producers are merely cutting costs by generating the candidate’s likeness instead of recording a conventional new video with them. That leads to the question of whether the candidate authorized the likeness.
- Likewise, is the ad simply using generative AI to create the likeness of a person to relay a rhetorical claim – one that is not fact-checkable because it is too general? Such claims are protected political speech anyway. For example, Donald Trump may never have actually said “Biden miserably messed up in handling Maui,” but a generative AI ad could use Trump’s likeness to show him saying that. In that case, did he authorize the rendition?
Depending on the answers to these questions, reporters and editors will be better positioned to decide whether and how they want to report the claims.
What is the real opportunity?
There is a real opportunity to go beyond the content of the ads. News outlets could use generative AI ads to educate the public about how generative AI works and how it comes into play.
Deepen labeling
When WESH 2 reported on the DeSantis attack ad, it labeled the video “Generative AI Political AD” even as the reporter was debunking the mechanics. Additional labeling would not hurt at all. A news report on the ad could add a second line to the running label, for example:
“The advertisement contains generated audio, using text posted by person X or Y on platform Z. Person X did not say this.”
This gives the viewer a clear idea – whether or not they are listening to the reporter – and goes beyond the general umbrella label “Generative AI Political AD.”
Going further, generative AI ads are raw material for great explainers:
Beyond quick debunks, journalists could write explainer articles on generative AI ads to let the public see how the sausage gets made.
Use the same seven questions above as an explainer checklist. Help people see how AI works in ads, as distinct from other forms of advertising creativity. Demystify the concepts of training data and language models. Explode myths about what the term artificial intelligence really amounts to: machines do not understand meaning as humans do.
This will build much-needed generative AI literacy as the 2024 election cycle goes into full gear. The more background knowledge we have as political ad viewers, the more likely an ad is to trigger our curiosity about the generative AI used in it. In turn, we may look for explanations, which slows us down before we get pulled into believing the claims or quickly sharing the ad online. That rapid, uncritical behavior is exactly what deceptive and manipulative campaigns want.
There are no magic bullets, no one-size-fits-all answers. Debunk, explain, educate.