Marketers should be transparent about the use of GenAI on social media.

Canadian consumers are more apprehensive than the global average about artificial intelligence's potential negative impacts, even as Canadian marketers remain bullish about AI-generated content. To promote the responsible use of AI, Parliament is developing its first federal bill (Bill C-27) on artificial intelligence, [1] while social media platforms have launched policies for labelling AI-generated content. Originally intended to target harmful political misinformation, such policies have already impacted marketers.

Among marketers who use GenAI to create social media content, Canadians lead the world in terms of the volume of AI-generated content produced, but they have room for improvement when it comes to transparency. According to Capterra’s survey of over 1,600 global marketers who use GenAI to create social media content, including 108 marketers located in Canada, Canadians use GenAI to produce an average of 54% of their companies’ social media content. However, only 44% consistently label content as AI-generated on social media platforms.* This discrepancy creates risk for companies and consumers alike.

Companies don’t need to wait for Bill C-27 to pass to improve transparency around GenAI usage. They can start implementing best practices into their marketing workflows to capture the efficiency of this new technology without stumbling into problems with social media platforms and their users.

Key insights
  • 44% of Canadian marketers using GenAI for social media content creation say their companies always label AI-generated content on social media.
  • 79% of Canadian marketers say mandatory labelling of AI-generated content on social media would positively impact their companies' social media performance. 
  • 80% of Canadian marketers (well above the global average of 67%) are moderately to highly concerned about the risk that their companies’ AI-generated marketing content could spread harmful misinformation. 
  • 73% of Canadian marketers say the use of AI-generated content has enhanced their companies’ performance on social media.

Canadian government and businesses are enthusiastic about GenAI; the public, less so 

Canadian marketers who currently use GenAI for content creation produce an average of 54% of their companies’ total social media marketing content using the technology—a figure well beyond the global average of 39%. By 2026, that figure is expected to grow to 65%, the highest projected GenAI usage across surveyed countries in Europe, North America, and Asia-Pacific.

For its part, the Canadian government hopes to stimulate the economy and ensure the safe and responsible use of AI by pursuing its first bill (Bill C-27) to regulate the technology. It has also pledged billions to support AI development, including the creation of an “AI safety institute.” [1]

Despite federal enthusiasm for AI, marketers face an uphill battle in winning over a deeply distrustful Canadian public. According to a 2024 Microsoft study, only 31% of Canadians trust AI, a figure that has actually declined since 2019 and now lies well below the global average. [2] Among Canadians’ chief concerns are privacy, the impact of unempathetic AI-driven decision-making, and future skill regression caused by AI dependency. While younger generations tend to be more optimistic about emerging tech, the Canadian public is wary.

AI labels on social media could protect Canadians from misinformation

One way to assuage Canadians’ skepticism about whether AI will be used responsibly could be to label AI-generated content on social media. Recent research by the Massachusetts Institute of Technology (MIT) suggests that consumers are less trusting of political content bearing an AI label. [3] Following that logic, some experts believe AI labels could help prevent wildly inflated customer expectations about a product’s appearance or performance based on an exaggerated image, similar to how cosmetics companies add fine-print disclaimers to ads featuring retouched models. Buyer beware: In the age of GenAI, what you see is not necessarily what you get.

Top social media platforms have already rolled out AI labelling policies to counteract an influx of AI-generated disinformation. Meta’s Instagram and Facebook apply a label automatically if an AI footprint is detected in the metadata, while other platforms such as TikTok and YouTube require creators to self-disclose the use of GenAI in their content.
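The automatic approach hinges on provenance metadata that GenAI tools embed in the files they produce, such as the IPTC `DigitalSourceType` field set to `trainedAlgorithmicMedia`. As a rough illustration of how such a check works (a minimal sketch only: real platforms parse full XMP and C2PA metadata structures, and the helper name here is hypothetical):

```python
# Minimal sketch: spotting an AI-provenance marker in image metadata.
# Platforms parse complete XMP/C2PA payloads; this simply scans for the
# IPTC DigitalSourceType value that GenAI image tools are expected to embed.

AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def looks_ai_generated(xmp_packet: str) -> bool:
    """Return True if the XMP metadata declares the content as AI-generated."""
    return AI_SOURCE_TYPE in xmp_packet

# Example XMP fragment resembling what a GenAI image tool might write:
sample_xmp = (
    '<rdf:Description xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/">'
    "<Iptc4xmpExt:DigitalSourceType>"
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    "</Iptc4xmpExt:DigitalSourceType>"
    "</rdf:Description>"
)

print(looks_ai_generated(sample_xmp))            # True: AI marker present
print(looks_ai_generated("<rdf:Description/>"))  # False: no marker
```

Note the asymmetry this creates: metadata is trivial to strip, which is why self-disclosure requirements exist alongside automatic detection.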

There’s one big problem, of course: Anyone using GenAI to intentionally mislead people is unlikely to self-disclose that they used the technology. And since we don’t yet have a foolproof method of detecting AI-generated content, AI labelling policies are currently unenforceable. There is also no universally accepted AI-generated content label, which muddies the very definition of “AI-generated content.” What should an AI label look like? What level of GenAI involvement in a given piece of content warrants a label? How do we deal with noncompliance? Experts have yet to agree.

The label theory also takes for granted that consumers will appreciate the transparency more than they detest fakery. Labels could instead backfire, inspiring general suspicion about companies that ostensibly use AI “responsibly”—about their authenticity and level of care toward employees and customers, for instance. Consumers are primed to pounce on the misuse of AI; Air Canada’s rogue GenAI chatbot is a high-profile example of how one viral misstep can tank a company’s public standing. [4]

Graph: Canadian marketers label AI-generated content on social media inconsistently—44% of marketers always label content, 39% sometimes label content, 15% never label content, and 3% are not sure.

Thus, a Canadian marketer’s decision whether to disclose their use of GenAI is fraught. Unsurprisingly, Canada’s AI labelling compliance rate is only 44%; while that exceeds the global average (30%), it’s a far cry from perfect.

Even companies that don’t create their own marketing content are at risk. Without effective detection tools, the 33% of surveyed Canadian marketers who hire creative agencies or freelancers to produce content can’t be sure whether those third parties are using GenAI. In fact, the majority (83%) of marketers who outsource are moderately to highly concerned that they might be unwitting recipients of AI output.

All of this uncertainty leaves Canadian companies in a tricky position, with little incentive to label their GenAI content. And though many marketers say AI labels are a good thing, their actions indicate otherwise.

Consumer distrust of GenAI disincentivizes labelling

Over three-quarters (79%) of Canadian marketers say AI labels would improve their social media performance, yet fewer than half consistently apply them. As illustrated above, consumer perception complicates the decision to label AI-generated content. Here are a few more reasons why Canadian companies may be noncompliant:

  • AI slop could lead to social media abstention. Businesses don’t want to be seen as contributing to an influx of “AI slop”—mediocre, unwanted GenAI content on the internet. [5] If consumers are statistically less likely to engage with AI-generated content, then why would businesses label it as such? Analysts predict that by 2025, a perceived decline in social media content quality related to GenAI will prompt half of consumers to significantly decrease their use of social media platforms. [6] Such a change would tank the return on investment (ROI) of GenAI tools, not to mention a whole host of social media marketing software investments. Businesses are hoping that their AI-generated content will be interesting enough that audiences won’t know or care that it wasn’t made by humans.
  • Competitors don’t label their content. Nearly one in three Canadian marketers who use GenAI say it’s yielded a competitive advantage.* As competitive pressures lead more companies to adopt GenAI, some marketers may omit labels in an effort to appear more authentic than competitors who are transparent about their use of GenAI. Until social platforms introduce reliable AI content detectors, labelling will continue to rely on companies acting in good faith, which not everyone does. [7]
  • Labelling adds a few steps to a not-quite-fully automated workflow. Aligning stakeholders on an internal labelling framework and adding a labelling step to the GenAI content publishing workflow is important work, but it takes time that could put businesses behind their competitors.
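That workflow step need not be heavy, though. A pre-publish gate that blocks unlabelled AI-generated posts can be a simple check (a hypothetical sketch; the class and field names here are illustrative, not any platform’s API):

```python
from dataclasses import dataclass

@dataclass
class SocialPost:
    caption: str
    ai_generated: bool      # set by the creator or an internal tracking tool
    ai_label_applied: bool  # whether the platform's AI disclosure was toggled

def ready_to_publish(post: SocialPost) -> bool:
    """Block publishing of AI-generated posts that lack a disclosure label."""
    if post.ai_generated and not post.ai_label_applied:
        return False
    return True

draft = SocialPost(caption="New product teaser",
                   ai_generated=True, ai_label_applied=False)
print(ready_to_publish(draft))  # False: needs its AI label before it goes out

draft.ai_label_applied = True
print(ready_to_publish(draft))  # True: labelled and ready
```

The point of a gate like this is that labelling becomes an automatic checkpoint rather than a judgment call made under deadline pressure.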

At the current crossroads, Canadian businesses have three options with regard to AI-generated content labelling:

  1. Label GenAI content on social media and face a potential backlash. 
  2. Choose not to label GenAI content on social media and maybe face the consequences if caught. 
  3. Choose not to use GenAI at all, with the option of positioning themselves as an “acoustic” brand that says no to AI. [6] 

None of these options are perfect, and the path forward is unclear. No matter where you are with your decision, you may have to choose sooner rather than later because customers are already responding strongly—and negatively—to the influx of AI content.

Transparency is the way to go with GenAI

All that said, being honest with customers and publishing only high-quality content is always an effective long-term strategy. If your business chooses to use GenAI for marketing, you should do so responsibly and transparently. 

Here are some tips on how to approach GenAI content and labelling on social media.

  • Do comply with social platforms’ stated policies on AI-generated content labelling.
  • Do explore ways that AI can automate the routine tasks your human creatives do with marketing content. For instance, AI-powered grammar checkers or image editing tools can save marketers time that is better spent on complex creative tasks. 
  • Don’t use GenAI to replace human creatives. GenAI can produce content quickly and at scale, but that content needs human intervention before it’s published. 
  • Don’t publish low-quality AI-generated content—in other words, content that’s boring, uncanny, or full of mistakes. Your social media audience will immediately clock that it’s AI-generated and will perceive you as inauthentic.

How top social media companies approach AI content labels

Meta (Instagram, Facebook, and Threads): Meta previously applied a “Made with AI” label to posts with metadata that indicated the presence of AI-generated content. Following an outcry from photographers whose content was labelled due to their use of digital editing tools, Meta updated its label to simply read “AI info.” [8] Users can click on the label to learn more about why it was applied.

TikTok: TikTok launched a required AI-generated content label last year, warning creators that their content could be removed if they did not disclose that they had used AI. It soon began testing technology that can automatically label AI-generated content. It now labels AI-generated content with “Content Credentials,” a digital watermarking technology that attaches metadata to AI-generated content. [9]

YouTube: As of March 2024, YouTube creators are required to label realistic-looking content that was made using AI. However, the label is not required for certain effects, such as beauty filters or background blurring, or for “clearly unrealistic” content, such as animation. [7]

The right software can benefit your labelling efforts

Software can help businesses manage their labelling efforts. For example, digital asset management software helps companies organize their image and video libraries so they can track which content needs a label when it’s time to publish it on social media. Similarly, brand management software provides a centralized location for storing, distributing, and editing brand guidelines as they evolve with the influence of AI. Brand management software also integrates with social media platforms so businesses can analyze the performance of AI-generated content.
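As a toy illustration of the kind of tracking a digital asset management tool automates (the asset records and field names below are entirely hypothetical), a content library can record AI provenance per asset and report which items still need a label before publishing:

```python
# Toy sketch of DAM-style tracking: which assets still need an AI label?
assets = [
    {"name": "summer-promo.mp4", "ai_generated": True,  "labelled": True},
    {"name": "team-photo.jpg",   "ai_generated": False, "labelled": False},
    {"name": "hero-banner.png",  "ai_generated": True,  "labelled": False},
]

# Flag AI-generated assets that have not yet had a disclosure label applied.
needs_label = [a["name"] for a in assets
               if a["ai_generated"] and not a["labelled"]]

print(needs_label)  # ['hero-banner.png']
```

Even a simple report like this keeps the labelling decision visible at publish time instead of relying on someone’s memory of how each asset was made.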


Survey methodology

*Capterra’s GenAI for Social Content Survey was conducted in May 2024 among 1,680 respondents in the U.S. (n: 190), Canada (n: 108), Brazil (n: 179), Mexico (n: 199), the U.K. (n: 197), France (n: 135), Italy (n: 102), Germany (n: 90), Spain (n: 123), Australia (n: 200), and Japan (n: 157). The goal of the study was to learn more about the impacts of generative AI on social media marketing strategies. Respondents were screened for marketing, PR, sales, or customer service roles at companies of all sizes. Each respondent indicated their use of generative AI to assist with their company's social media marketing at least once each month.

Sources

  1. Embassy Takes Down AI-Generated Canada Day Social Media Post, CBC
  2. Canada’s Generative AI Opportunity, Microsoft Canada
  3. Labeling AI-Generated Content: Promises, Perils, and Future Directions, MIT
  4. What Air Canada Lost In ‘Remarkable’ Lying AI Chatbot Case, Forbes
  5. First Came ‘Spam.’ Now, With A.I., We’ve Got ‘Slop’, The New York Times
  6. How Marketing Can Capitalize on AI Disruption, Gartner
  7. YouTube Adds New AI-Generated Content Labeling Tool, The Verge
  8. Instagram’s ‘Made With AI’ Label Swapped Out for ‘AI Info’ After Photographers’ Complaints, The Verge
  9. TikTok Begins Automatically Labeling AI-Generated Content, CNBC