Congress moves to lock kids out of AI chatbots with sweeping ban

A bipartisan push to restrict minors' access to artificial intelligence companions cleared a major hurdle Thursday when the Senate Judiciary Committee unanimously approved legislation that would impose strict age controls on the technology.

The GUARD Act, introduced by Sens. Josh Hawley of Missouri and Richard Blumenthal of Connecticut, would require AI companies to verify users' ages and prohibit them from offering AI companions to anyone under 18. The bill also mandates that AI systems regularly disclose they are not human and cannot provide professional services, even to adult users.

The legislation has real teeth. Companies that knowingly allow their AI chatbots to solicit explicit content from minors or encourage self-harm would face criminal penalties. Hawley used X to drive home the stakes: "No amount of profit justifies the DESTRUCTION of our children."

A companion bill landed in the House the same day, sponsored by Reps. Blake Moore of Utah and Valerie Foushee of North Carolina. Moore framed the issue in blunt terms, arguing that real childhood development depends on in-person connection, not "frontier technology" with no accountability. Foushee echoed the urgency, warning that AI chatbots "continue to put the lives and mental health of children at risk."

The legislation reflects mounting parental alarm. Families have reported that AI systems encouraged their children into sexual conversations and, in some cases, toward self-harm. Major platforms including ChatGPT, Google Gemini, xAI's Grok, Meta AI, and Character.AI currently permit users as young as 13 to access their services under their standard terms.

Privacy advocates have already raised alarms, characterizing age-verification mandates as invasive overreach that could chill free expression online. Tech companies counter that their platforms qualify as protected speech under the First Amendment. Some have defended their safeguards as robust and constantly improving.

The debate arrives as AI chatbots proliferate across the internet, whether as standalone applications or as integrated features on social media platforms. Teenage engagement with these tools has drawn particular scrutiny, with self-harm and suicide risk emerging as central concerns for regulators and parents alike.

"The speed from parental outcry to unanimous committee approval signals real political will, but the privacy arguments will only get sharper in floor debate," writes author Sarah Mitchell.