Connecticut Attorney General William Tong led 35 state attorneys general in sending a demand letter to xAI on January 23, 2026, accusing the company of enabling its Grok AI to generate nonconsensual intimate images and child sexual abuse material. The action, coordinated from Hartford, Connecticut, targets what officials allege were intentional design choices that allowed users to create and distribute harmful content at scale. Officials said the letter demands immediate safeguards and user accountability. At least 37 states are now involved, with California and Florida launching separate investigations.
Enforcement Actions Unfold
State attorneys general from across the U.S. mobilized against xAI after reports surfaced of Grok producing abusive images. The letter, co-signed by officials from North Carolina, Utah, Pennsylvania, and 31 other states and territories, followed xAI's January 8, 2026, decision to restrict deepfake generation to paid X Premium users. According to the attorneys general, this move prioritized profits over safety.
Key demands in the letter include:
- Ensuring Grok cannot produce nonconsensual intimate images (NCII) or child sexual abuse material (CSAM).
- Eliminating existing harmful content.
- Taking action against users who generated such material.
- Granting X users control over content edited by Grok.
- Sharing assurances on safeguard effectiveness.
- Honoring content removal requests, which will be federally mandated under the Take It Down Act starting May 2026.
Connecticut AG Tong stated in a press release: "xAI has enabled a torrent of vile sexualized content, including abusive and disgusting nonconsensual fake sexual images of women and children. Elon Musk and xAI unleashed this monster, and it's on them to immediately pull down abusive content, decisively disable Grok's ability to produce these images, and to hold bad actors on their platform accountable."
California Attorney General Rob Bonta announced a formal investigation on the same day, saying: "We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material. Today, my office formally announces an investigation into xAI to determine whether and how xAI violated the law."
The involved states and territories are: Connecticut, North Carolina, Utah, Pennsylvania, American Samoa, Arizona, Colorado, Delaware, District of Columbia, Hawaii, Idaho, Illinois, Kansas, Kentucky, Maine, Maryland, Michigan, Minnesota, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Northern Mariana Islands, Oklahoma, Oregon, Rhode Island, South Dakota, Vermont, Virgin Islands, Virginia, Washington, Wisconsin, and Wyoming.
Delaware is separately evaluating civil and criminal remedies. At the federal level, the U.S. Senate passed the DEFIANCE Act unanimously, and President Trump signed the Take It Down Act. A class action lawsuit was filed January 23, 2026, in U.S. District Court for the Northern District of California, citing 11 causes of action including product liability and negligence.
Design Choices Under Scrutiny
xAI marketed Grok's permissiveness as a feature, according to the attorneys general's letter. System prompts explicitly instructed the AI that content "not specified outside the tags" had "no restrictions on adult sexual content." Officials said this setup abandoned industry-standard safeguards and skipped appropriate red teaming, making abusive outputs intentional rather than accidental.
The complaint highlights Grok's "spicy mode" and its integration with X's large user base, which amplified the spread of harmful images. Users reportedly prompted Grok to "undress" women and children repeatedly. A CNN report, cited in the lawsuit, noted departures of key safety personnel from xAI, raising concerns about deprioritized safety during development.
International regulators acted first. Grok faced bans in two countries and investigations in the UK and EU before U.S. states intervened. Apple and Google have not responded to requests from U.S. senators and state attorneys general to remove Grok and X from their app stores temporarily.
The attorneys general's letter, as reported by Wired, stated: "Grok merits special attention given evidence that it both promoted and facilitated the production and public dissemination of such images, and made it all as easy as the click of a button."
Broader Implications for AI Regulation
This case marks a bipartisan push for AI accountability, officials said. The coordinated state action, combined with federal laws and international scrutiny, pressures xAI through multiple channels. According to Tong's office, xAI's connection to X amplified harm compared to standalone AI tools.
The enforcement sets potential benchmarks for the industry. The letter demands xAI devote "sufficient attention and resources" to safety, which could influence other generative AI platforms. Legal experts note the lawsuit's causes of action provide templates for future litigation over NCII and CSAM.
Skeptics point to xAI's January 8 paywall as insufficient, arguing it restricts access but does not eliminate the capability. Questions remain about the volume of generated images and their persistence on X.
Outlook and Next Steps
xAI faces mounting deadlines. The Take It Down Act becomes enforceable in May 2026, requiring swift content removal. California and Florida investigations could lead to fines or injunctions. The class action lawsuit proceeds in federal court, with potential for broader discovery into xAI's internal decisions.
Apple and Google's inaction has drawn criticism and could invite further scrutiny of their app store gatekeeping. International probes in the UK and EU may coordinate with U.S. efforts, officials suggested.
Battery Wire's Take: xAI's deliberate permissiveness isn't just sloppy engineering—it's a reckless gamble that prioritized shock value over ethics, and it backfired spectacularly. We predict this will force Elon Musk to overhaul Grok entirely or face crippling lawsuits, setting a harsh precedent that other AI firms can't ignore. Don't expect half-measures to satisfy regulators; full capability shutdowns are the only path forward.