The “walled garden” promise of safety and curation in the world’s two largest app stores has just been exposed as a convenient corporate fiction. While Apple and Google spend millions marketing their platforms as safe ecosystems for families, a new report reveals they haven’t just ignored predatory “nudify” apps—they’ve actively promoted them.
- Profit Over Policy: “Nudify” apps—which use generative AI to create nonconsensual intimate imagery—have generated over $122 million in revenue, creating a financial incentive for platforms to look the other way.
- Active Promotion: Beyond mere oversight, Google allegedly used ad carousels to increase the visibility of sexually explicit apps, contradicting its own stated safety guidelines.
- The AI Loophole: Despite removing a handful of apps after public exposure, both giants continue to host high-profile tools such as Grok, which has been linked to the mass production of sexualized deepfakes.
The Deep Dive: The Cost of Convenience
For years, Apple and Google have wielded their app store policies like a sword, banning everything from niche porn apps to software that merely broke design guidelines. The rise of generative AI, however, has created a new, lucrative grey area. “Nudify” apps don’t just host pornography; they provide the tools to create it from existing photos of unsuspecting individuals. This is not a failure of technology, but a failure of enforcement.
The Tech Transparency Project’s findings highlight a systemic hypocrisy: the platforms let users search for explicit terms like “deepnude” and “undress,” effectively acting as a directory for digital abuse. The financial data supplies the motive. With 483 million downloads and a nine-figure revenue stream, these apps are highly profitable, and both stores take a cut of in-app revenue, typically 15 to 30 percent. For the platform holders, the “oversight” is less a lack of tools than the allure of that commission.
This is part of a broader, more dangerous trend in which AI capabilities are outstripping the regulatory will of the companies providing the infrastructure. When a tool like Grok can generate 1.4 million sexualized deepfakes in under ten days, the “private concerns” and “threats of removal” issued by Apple amount to corporate theater: enough to shield the company from liability, not enough to disrupt the revenue flow.
The Forward Look: What Happens Next
The era of “self-regulation” for AI-driven content is reaching a breaking point. We should expect three immediate shifts:
First, legislative escalation. With the EU already pushing age verification and stricter content moderation, the US is likely to see renewed pressure for platform-liability laws. If Apple and Google are promoting these apps via carousels and search suggestions, courts and regulators may soon treat them as publishers or promoters of nonconsensual imagery rather than neutral marketplaces.
Second, the cat-and-mouse cycle. As Apple and Google block specific keywords, developers will pivot to coded language to slip past the filters. That will force the platforms either to implement genuine AI-driven auditing, which they are demonstrably capable of building, or to admit that their safety policies are optional.
Finally, a crisis of trust for the walled garden. As the gap between corporate PR and user reality widens, the argument that centralized app stores are “safer” than open ecosystems becomes harder to sustain. The real question is no longer whether these apps exist, but how large a cut the platforms are willing to keep taking before they finally delete them.