AI-Generated Comments Threaten Public Trust in Regulatory Processes
Southern California regulators recently faced a deluge of opposition to a proposed rule incentivizing heat pump adoption – a staggering 20,000 comments, far exceeding typical response volumes. This surge wasn’t organic. The South Coast Air Quality Management District (SCAQMD) quickly suspected foul play, noting the “volume and nature” of the submissions raised serious questions about their authenticity, according to agency spokesperson Rainbow Yeung. Even more alarming, the agency’s executive director received an email thanking him for opposing a rule his own team had drafted.
The Shadow Campaign: Unmasking Automated Opposition
Initial investigations revealed a disturbing pattern. When the SCAQMD contacted 172 commenters to verify their submissions, almost no one responded. Of the few who did, three admitted they had no knowledge of any comment filed under their name. A parallel investigation by the Sierra Club yielded similar results: all four contacted individuals denied authoring the comments attributed to them.
Reporting by the Los Angeles Times identified CiviClick, a company marketing “AI-powered advocacy tools,” as the orchestrator of the opposition campaign. The client? A public affairs consultant with ties to the gas industry. This revelation sparked a broader debate about the vulnerability of public comment periods to manipulation and the erosion of trust in democratic processes.
CiviClick vehemently denies using AI to fabricate comments or submitting anything without explicit consent. The SCAQMD investigation is ongoing, with officials exploring more “aggressive” sampling methods to overcome the initial lack of response. But the incident highlights a fundamental challenge: how can agencies reliably distinguish genuine citizen input from AI-generated noise?
A History of Fake Comments: From Net Neutrality to Local Pipelines
This isn’t an isolated incident. The Federal Communications Commission (FCC) was inundated with 22 million comments during the 2017 net neutrality debate, with approximately 18 million later identified as fraudulent. The source? A single college student and a flood of submissions originating from Russian email addresses. New York Attorney General Letitia James subsequently fined six “lead generator” companies for impersonating millions of individuals.
In principle, AI makes convincing fake comments far easier and cheaper to produce at scale. CiviClick maintains its platform simply personalizes comments based on user input. The company asks questions – for example, about the financial impact of tax increases – and then tailors an email accordingly. It also uses AI to predict how responsive a campaign will be.
“A homeowner in Riverside County who had recently installed a gas furnace wrote a different message than a renter in Los Angeles who was concerned about landlord compliance costs,” explains Chazz Clevinger, CiviClick’s founder and CEO, in an interview with Fast Company. “A contractor in San Bernardino County who builds new homes wrote a different message than a retiree in Orange County worried about electricity grid strain during heat waves.” Clevinger insists the tool merely helps people “articulate their genuine concerns.”
The Illusion of Engagement: Are AI-Assisted Comments Legitimate?
However, the Sierra Club disputes this claim. Dylan Plummer, campaign advisor for the organization’s “Clean Heat” campaign, argues that even AI-assisted comments are problematic. “Regulators prioritize customized comments, recognizing the time and effort involved, over generic form letters,” Plummer explains. “Using AI to generate these customized comments creates a false impression of widespread, engaged participation.”
The core concern, Plummer emphasizes, is the attribution of comments to individuals who never actually submitted them. Similar incidents have surfaced in the Bay Area, where the Energy and Policy Institute filed public records requests related to the Speak4 platform. Investigations revealed seven individuals who denied any knowledge of comments filed under their names, with one woman stating, “Why would I ever oppose regulations to protect clean air?”
Proving the authenticity of comments after the fact is exceptionally difficult. Plummer recounts the arduous process of tracking down commenters, often facing skepticism and accusations of being a scammer.
A similar pattern emerged in North Carolina, where county commissioners received hundreds of emails supporting a new gas pipeline, only to discover that many constituents hadn’t sent them. The campaign backfired, prompting the board to unanimously pass a resolution raising concerns about the project.
The Impact on Policy: Does Volume Matter?
While the sheer volume of fake comments is alarming, their actual impact on policy decisions remains unclear. Steven Balla, a political science professor at George Washington University, argues that agencies primarily focus on the content of comments, not the identity of the commenter. “What matters is the technical, legal, and economic information presented,” Balla says. “Agencies aren’t simply counting votes.”
However, the proliferation of AI-generated comments could erode public trust in the regulatory process. Jonathan Brennan, director of the Center on Technology Policy at New York University, warns of a potential “secondary effect” – government officials dismissing all public comments as potentially inauthentic. This could lead to a greater reliance on in-person testimony, disadvantaging those unable to attend hearings.
Fortunately, agencies can leverage technology to identify and filter out duplicate comments, a significant improvement over the manual sorting of paper submissions in the 1990s. But as AI becomes more sophisticated, the challenge of distinguishing genuine from fabricated input will only intensify.
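The kind of duplicate detection described above can be surprisingly simple in its basic form. The sketch below is a minimal, hypothetical illustration (not any agency's actual system): it normalizes each comment, breaks it into overlapping word shingles, and flags pairs whose overlap exceeds a threshold, so that trivially reworded copies of a form letter cluster together. All function names and the threshold value are illustrative assumptions.

```python
# Hypothetical sketch of near-duplicate comment detection, not an actual
# agency tool. Normalization and the 0.8 threshold are illustrative choices.
import re
from itertools import combinations


def normalize(text: str) -> str:
    """Lowercase, collapse whitespace, and strip punctuation so that
    trivial edits (capitalization, extra spaces, '!!') still match."""
    collapsed = re.sub(r"\s+", " ", text.lower())
    return re.sub(r"[^a-z0-9 ]", "", collapsed).strip()


def shingles(text: str, k: int = 3) -> set:
    """Set of k-word shingles used to measure textual overlap."""
    words = normalize(text).split()
    return {tuple(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: shared shingles over total distinct shingles."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def flag_near_duplicates(comments, threshold=0.8):
    """Return index pairs of comments whose overlap exceeds the threshold."""
    sigs = [shingles(c) for c in comments]
    return [
        (i, j)
        for i, j in combinations(range(len(comments)), 2)
        if jaccard(sigs[i], sigs[j]) >= threshold
    ]


pairs = flag_near_duplicates([
    "Please reject this rule, it raises costs.",
    "please reject this rule it raises costs!!",
    "I support clean air protections for my family.",
])
print(pairs)  # the first two comments are flagged as near-duplicates
```

Pairwise comparison like this scales poorly to tens of thousands of comments; real systems would use techniques such as locality-sensitive hashing instead. And as the article notes, AI-personalized comments are designed to vary in exactly the ways this kind of overlap check measures, which is why detection keeps getting harder.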
What safeguards can be implemented to ensure that public discourse remains a genuine reflection of citizen opinion? And how can we prevent AI from further undermining trust in our democratic institutions?
In Southern California, the SCAQMD board narrowly defeated the proposed rule, but the debate is far from over. The Sierra Club is seeking fraud investigations, and a new bill, “People Not Bots,” has been introduced to clarify that AI tools are not considered individuals and should not be permitted to submit public input.
The SCAQMD is exploring more secure comment submission portals, but verifying human authorship is becoming increasingly complex. “Maintaining the integrity of our public process is a top priority,” says Yeung.