Racist GenAI Ads Target Tunic & Night in the Woods


TikTok isn’t just a platform for viral dances and short-form video anymore; it’s rapidly becoming a testing ground – and a cautionary tale – for the unchecked power of generative AI in advertising. The recent experience of indie game publisher Finji, creator of critically acclaimed titles like Night in the Woods and Tunic, reveals a disturbing trend: TikTok’s AI is actively modifying user ads, even *after* publishers have explicitly opted out, and in some cases, with deeply harmful results. This isn’t a bug; it’s a symptom of a larger problem – the platform prioritizing algorithmic “optimization” over creator control and brand safety, and a worrying sign of things to come as AI becomes further integrated into ad tech.

  • AI Override: TikTok’s “Smart Creative” and “Automate Creative” features were reportedly modifying Finji’s ads despite being disabled by the publisher.
  • Harmful Stereotypes: One AI-generated ad featured a sexualized and racially charged depiction of a character from Finji’s game, Usual June.
  • Support Failure: Finji faced a frustrating and circular support process, with TikTok initially denying the issue, then blaming a “catalog ads format,” and ultimately offering no concrete resolution.

Finji’s CEO, Rebekah Saltsman, first brought the issue to light on Bluesky, sharing screenshots of altered ads flagged by concerned users. The core of the problem lies in TikTok’s advertising tools, specifically “Smart Creative” and “Automate Creative.” These features promise to improve ad performance by automatically generating variations – mixing images, text, and formats – to find what resonates best with users. That sounds innocuous, but problems arise when the features remain active despite being switched off (as Finji alleges happened here) and begin to fundamentally alter creative assets, potentially introducing biases and harmful representations.

The case is particularly egregious because Finji had explicitly disabled these AI-driven features. TikTok’s initial response – claiming no AI involvement and then attributing the changes to a “catalog ads format” designed to boost ROAS (Return on Ad Spend) – only compounded the problem. The platform’s subsequent offer to add Finji to an “opt-out blocklist” (with no guarantee of approval) feels less like a solution and more like a deflection of responsibility. This incident highlights a critical power imbalance: advertisers are increasingly reliant on platforms’ algorithms for reach, but have diminishing control over how their brands are presented.

The Forward Look: A Looming Crisis for Brand Safety

This isn’t an isolated incident. As generative AI becomes more sophisticated and integrated into advertising platforms, we can expect to see more instances of algorithmic overreach and unintended consequences. Several key trends are converging to create a perfect storm:

  • The AI Arms Race: Platforms are under immense pressure to demonstrate the value of their AI capabilities to advertisers. This incentivizes aggressive deployment of AI-driven features, even at the expense of user control.
  • The Black Box Problem: The inner workings of these AI algorithms are often opaque, making it difficult for advertisers to understand *why* certain changes are being made and to identify potential biases.
  • Scaling Challenges: As the volume of ads increases, manual oversight becomes increasingly impractical, creating opportunities for AI-generated content to slip through the cracks.

What’s likely to happen next? Expect increased scrutiny from regulators regarding AI-driven advertising practices. The EU’s Digital Services Act (DSA) and similar legislation in other regions are beginning to address the risks associated with algorithmic amplification and content moderation. Advertisers will likely demand greater transparency and control over AI-driven ad modifications, potentially leading to a shift towards more privacy-focused and consent-based advertising models. However, without significant changes to platform incentives and a commitment to brand safety, incidents like Finji’s are likely to become more frequent, eroding trust in digital advertising and potentially stifling creativity.

Finji’s experience serves as a stark warning: the promise of AI-powered advertising comes with significant risks. The industry needs to prioritize ethical considerations, transparency, and creator control before the unchecked power of algorithms further damages brand reputations and perpetuates harmful stereotypes. The question isn’t whether AI will play a role in advertising, but *how* – and whether platforms will prioritize profit over principles.

Rebekah Valentine is a senior reporter for IGN. Got a story tip? Send it to [email protected].

