5 MINUTE READ | March 16, 2017
Being Brand Safe In the World of Fake News
The intense controversy following the November 2016 presidential election is far from over, especially for brands hoping to avoid appearing to lean right or left. Digital advertising campaigns have landed brands in trouble by unintentionally serving ads on fake news sites and extreme political sites filled with hate speech, such as Breitbart, a news outlet known for its far-right commentary and opinion pieces.
The nature of digital advertising allows brands to unintentionally serve ads on sites that support particular political ideologies. This has not been a challenge in traditional advertising, because advertisers and their agencies seek out the specific places they want to run and buy media directly from that publisher, channel, or location. The focus is on targeting the channel.
What makes digital advertising different, and riskier for brands whose ads may land on sites that do not align with their message, is that campaigns target a specific type of consumer and that consumer's online behavior, rather than just contextually relevant sites, in the hope of reaching the right person wherever they browse. While impression volume and spend on these controversial sites may not be high for any one brand, once a single impression is served to a consumer who reports it, the negative attention begins.
Sleeping Giants, an activist Twitter account, is responsible for bringing this ad serving issue to light. The Twitter account sought to dismantle Breitbart’s business model by pressuring advertisers running ads on Breitbart articles to drop support for the extreme news site. Their tactic? Publicly mention the brand on Twitter accompanied by a screenshot of the ad and a caption informing the brand of the type of sites they were running on. With the threat of a PR headache for aligning with such an extreme political ideology very apparent, many brands sought refuge by asking their agencies to block the site.
Brands such as Kellogg’s and Warby Parker, among others, announced the decision to pull their advertisements off Breitbart.
Adding Breitbart and other fake news sites to blacklists is not as simple a task as it sounds. Unlike blocking mature content, negative keywords, or pornography, blocking fake news sites adds an element of subjectivity. Who is to say what is “safe” and what is not? There is a fine line between limiting free speech and protecting a brand’s identity.
Within DSPs, third-party vendors such as Google and Peer 39 offer brand safety categories available for targeting, to reduce the chance that ads run on extreme and fake news sites. While Google and Peer 39 each have their own definitions of brand safety, so do all SSPs, exchanges, and ad servers. For example, Google has added a “Sensitive Social Issues” category, which is defined as, “Issues that evoke strong, opposing views and spark debate. These include issues that are controversial in most countries and markets (such as abortion), as well as those that are controversial in specific countries and markets (such as immigration reform in the United States).”
Similarly, Peer 39 defines “Safe from All Negative Content” as “This category bundles all Safety categories together to exclude all Negative content at once. This does not include Safe from Negative Industry categories, but it does protect from alcohol, drugs, firearms, gambling, mature, profanity, tobacco, torrent, and negative news.”
Because blocking ads on these types of sites is so subjective, the challenge lies in continuing to define what constitutes “unsafe” and which sites rate as “bad enough” to be included in a blacklist.
Brands working with agencies should take the first step of asking their partners to create a blacklist that could be applied across all campaigns to block these types of sites from serving an ad. (At PMG, we developed a universal blacklist applied to all accounts.)
Marketers should regularly review the site list reports from their campaigns with an extra level of granularity: look at each site your ads ran on and verify that it meets established standards. Taking this a step further, brands and their agencies should actively research known fake news and controversial sites to monitor for compliance.
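The site list review described above can be sketched as a small script that flags any domain in a campaign report that appears on a blacklist. This is an illustrative sketch only, not PMG's actual tooling; the file formats, column header, and domain names are assumptions.

```python
import csv

def load_blacklist(path):
    """Read one blocked domain per line into a set for fast lookup."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def audit_site_report(report_path, blacklist):
    """Return rows from a CSV site report whose 'site' column is blacklisted.

    Assumes the report has a header row with a 'site' column, e.g. the
    domain each impression was served on.
    """
    flagged = []
    with open(report_path) as f:
        for row in csv.DictReader(f):
            domain = row["site"].strip().lower()
            if domain in blacklist:
                flagged.append(row)
    return flagged
```

In practice the blacklist file would be the agency's universal list, updated as new controversial sites are identified, and any flagged rows would trigger a manual review and a request to the DSP to exclude the domain.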
We also recommend applying third-party contextual segments to block all controversial, violent, mature, political, and malicious-activity sites, while custom-building a brand safety segment that encompasses all content deemed unsafe, to be applied to all campaigns.
Additionally, invest in wrapping tags with a verification partner such as DoubleVerify or Integral Ad Science to further verify where your digital ads run, either pre- or post-bid. Finally, to be extra cautious, one best practice that PMG follows is to exclude all non-transparent links. While this limits scale in some cases and increases CPMs, in the current climate, every measure should be taken to keep campaigns bulletproof against further and undue scrutiny.
Posted by Ashley McMahan