Tools like ChatGPT could complicate marketers’ ability to scrutinize online content. Some firms say the fix could be yet more AI.
Generative artificial intelligence has captivated marketers’ attention by promising to help them conduct research and produce campaigns more efficiently. But it is also poised to further complicate some of their most thankless tasks, such as ensuring that their ads run only near content they want and that their digital marketing materials stay on-brand.
Generative AI tools such as ChatGPT, for example, can exponentially increase production of the kind of middling and low-quality “clickbait” content that constitutes “made-for-advertising” websites. Such properties are designed to lure automated ad-buying systems into placing ads there even though human brand managers would rarely make the same choice.
The solution to such challenges, perhaps counterintuitively, could be more AI. A new wave of products and services claims that AI can help marketers screen digital media, potentially setting up an arms race in the broad area known as brand safety.
The prospect of companies developing AI tools to counter the effects of other AI tools calls to mind the classic Mad magazine comic strip “Spy vs. Spy,” in which two nearly identical secret agents engage in an endless battle against each other, said Rob Rakowitz, co-founder of the Global Alliance for Responsible Media.
GARM is an initiative from the trade group World Federation of Advertisers to develop standards regarding the monetization of digital content and how publishers and platforms define hate speech, misinformation and other types of content that advertisers want to avoid.
The number of AI-powered brand-safety tools and startups is likely to grow in the coming months, according to Rakowitz. “There’s a lot of money being thrown around right now,” he said, referring to investors’ hunger for all things AI.
Automated oversight
Fifteen percent of all automated, or programmatic, ad buys go to made-for-advertising sites, according to a study conducted between September 2022 and January 2023 by the Association of National Advertisers, a U.S. trade group. At least $13 billion of the $88 billion spent on programmatic ads around the world annually is wasted on such sites, according to the ANA.
Marketers hope that new and future AI-fueled tools can help bring those figures down. In the meantime, brand-safety firms are applying AI in new ways to analyze digital data more quickly and accurately—a key step in fighting the problem.
Tech firm Pixability’s core product uses machine-learning AI models to review and classify millions of videos posted to YouTube and streaming TV platforms, thereby helping ad buyers determine which pieces of content are appropriate for their brands. Late last year, the company began using ChatGPT to strengthen these underlying algorithms by automating the process of asking—and answering—key yes-or-no questions about individual videos: Do they focus on certain sports? Do they feature adult themes? Do they include depictions of violence?
“What we find is that ChatGPT is as accurate, and in a lot of cases more accurate, than human categorization,” said Jackie Swansburg Paulino, Pixability’s chief product officer.
In one such case, ChatGPT accurately labeled a YouTube video from gun maker Beretta as being related to firearms and ammunition, said Swansburg Paulino. Pixability’s human review team had not placed the content in that category because they were unfamiliar with the Beretta brand and the video, which features headsets and eyewear, includes no images of firearms.
A human must still view each of the videos that are used to train Pixability’s machine-learning model. But the use of ChatGPT has helped the firm review content far more quickly, said Swansburg Paulino, because it can ask more questions, and feed more answers into the underlying system, than any human team ever could. Eventually, this process could match, or at least approach, the pace at which videos are uploaded to platforms like YouTube every day, she said.
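The yes-or-no questioning described above can be sketched as a simple classification loop. Everything in this sketch — the question list, the category names, and the keyword-matching stub standing in for a call to a model such as ChatGPT — is an illustrative assumption, not Pixability’s actual system.

```python
# Minimal sketch of automated yes/no content screening. A real system
# would send each question plus the video's title, description and
# transcript to an LLM and parse its yes/no answer; here a keyword
# match stands in so the sketch runs without network access.

SCREENING_QUESTIONS = {
    "firearms": "Does the video relate to firearms or ammunition?",
    "adult_themes": "Does the video feature adult themes?",
    "violence": "Does the video include depictions of violence?",
}

# Hypothetical keyword lists acting as the stand-in "model".
_KEYWORDS = {
    "firearms": ("rifle", "pistol", "ammunition", "shooting", "firearm"),
    "adult_themes": ("explicit", "mature"),
    "violence": ("fight", "assault", "graphic"),
}

def ask_yes_no(category: str, metadata: dict) -> bool:
    """Stub for asking a model one yes/no question about a video."""
    text = " ".join(metadata.values()).lower()
    return any(word in text for word in _KEYWORDS[category])

def screen_video(metadata: dict) -> dict:
    """Answer every screening question, yielding a category -> bool map
    an ad buyer could match against a brand's exclusion list."""
    return {cat: ask_yes_no(cat, metadata) for cat in SCREENING_QUESTIONS}

video = {
    "title": "Beretta shooting-range gear review",
    "description": "Headsets and protective eyewear for the range.",
}
print(screen_video(video))
# -> {'firearms': True, 'adult_themes': False, 'violence': False}
```

Because each question is independent, a pipeline like this can fan out thousands of such queries per video, which is what makes the approach faster than a human review team.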
Brand discipline
There are other ways that AI is being called upon to help police its use in marketing and advertising.
As marketers became more interested in using AI to generate marketing materials, an opportunity developed to help make sure the machines truly hew to brands’ own guidelines, said Rob May, chief executive at Brandguard.ai, formerly known as Nova.ai.
Brandguard.ai’s core product originally created and tested hundreds of slight variations on social media ads. The company now uses machine learning to assess whether marketing materials created by generative AI, such as website landing pages, fit with a brand’s voice, according to May. It can also identify uses of trademarked content or messages that too closely resemble those of competing brands, May said.
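The article does not say how Brandguard.ai scores brand fit. One common technique for this kind of check is vector similarity between a candidate text and reference copy; the sketch below uses a simple bag-of-words cosine similarity as a stand-in for learned embeddings, and the brand/competitor reference texts and threshold logic are invented for illustration.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts.
    A production system would use learned embeddings instead."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical reference copy for the brand and a competitor.
BRAND_VOICE = "bold adventurous gear for outdoor explorers"
COMPETITOR_VOICE = "affordable everyday basics for the whole family"

def flag_off_brand(candidate: str) -> bool:
    """Flag AI-generated copy that reads closer to the competitor's
    voice than to the brand's own."""
    return (cosine_similarity(candidate, COMPETITOR_VOICE)
            > cosine_similarity(candidate, BRAND_VOICE))

print(flag_off_brand("affordable basics for the whole family"))       # True
print(flag_off_brand("bold gear for adventurous outdoor explorers"))  # False
```

A screen like this could run automatically over every generated landing page before publication, with only the flagged items routed to a human reviewer.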
Marketers “don’t want anything slipping out that’s not on-brand. That can happen very easily with generative AI,” he said.
The coming months and years will see more products that use AI to address brand-safety and messaging concerns amid an increase in both the volume of AI-generated marketing content and the proportion of ads purchased programmatically, said Rory Latham, senior director for global investment of programmatic at agency network GroupM.
“Our clients are very interested in what’s out there and what they could be testing,” Latham said. “It’s just not scalable to verify everything with humans instantly.”
Even as generative AI allows marketers to automate more of this sort of work, it will also increase the need for human oversight of the endless data stores that power its models, said Mark Zagorski, CEO of ad-verification firm DoubleVerify.
“AI just eats everything,” he said. “It needs to be fed.”
—
This article first appeared on www.wsj.com