WASHINGTON — Elon Musk’s AI tool Grok is facing international condemnation for generating sexualized deepfakes of women and minors, with the European Union and Britain joining the criticism and signaling potential investigations.
Complaints of abuse have flooded the internet following the recent rollout of an “edit image” button on Grok, which allows users to alter online images with prompts such as “put her in a bikini” or “remove her clothes.”
The digital undressing spree prompted swift probes or calls for remedial action from countries including France, India and Malaysia, amid growing concerns over proliferating artificial intelligence (AI) “nudify” apps.
The European Commission stated it was “very seriously looking” into the complaints about Grok, developed by Musk’s startup xAI and integrated into his social media platform X. EU digital affairs spokesman Thomas Regnier said, “Grok is now offering a ‘spicy mode’ showing explicit sexual content with some output generated with childlike images. This is not spicy. This is illegal. This is appalling.” He added, “This has no place in Europe.”
The UK’s media regulator Ofcom said it had made “urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK” and will determine if further investigation is warranted.
Malaysia-based lawyer Azira Aziz expressed horror after a user prompted Grok to change her “profile picture to a bikini.” “Innocent and playful use of AI like putting sunglasses on public figures is fine,” Aziz told AFP. “But gender-based violence weaponising AI against non-consenting women and children must be firmly opposed,” she added, urging users to report violations.
X users, including Ashley St Clair, the mother of one of Musk’s children, have also voiced outrage. St Clair wrote on X, “Grok is now undressing photos of me as a child,” calling it “objectively horrifying, illegal.”
When contacted for comment, xAI replied with an automated response: “Legacy Media Lies.”
Grok acknowledged flaws in the tool on Friday, stating, “We’ve identified lapses in safeguards and are urgently fixing them.” The tool also stated, “CSAM (Child Sexual Abuse Material) is illegal and prohibited.” Grok previously apologized for generating an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt.
The public prosecutor’s office in Paris expanded an investigation into X last week to include accusations that Grok was being used for generating and disseminating child pornography. The initial investigation against X was opened in July following reports of algorithm manipulation for foreign interference.
Indian authorities directed X to remove the sexualized content, clamp down on offending users and submit an “Action Taken Report” within 72 hours, or face legal consequences, according to local media reports. As of Monday, there was no update on whether X responded.
The Malaysian Communications and Multimedia Commission also expressed “serious concern” over the content and stated it was investigating violations and will summon X’s representatives.
The criticism adds to existing scrutiny of Grok, which has also been criticized for generating misinformation about recent events, including the war in Gaza, the India-Pakistan conflict and a deadly shooting in Australia.