A new bipartisan bill introduced by Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., would bar minors (under 18) from interacting with certain AI chatbots. It taps into growing alarm about children using "AI companions" and the risks these systems may pose.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM newsletter.
What's the deal with the proposed GUARD Act?
Here are some of the key features of the proposed GUARD Act:
- AI companies would be required to verify user age with "reasonable age-verification measures" (for example, a government ID) rather than simply asking for a birthdate.
- If a user is found to be under 18, a company must prohibit them from accessing an "AI companion."
- The bill also mandates that chatbots clearly disclose they are not human and do not hold professional credentials (therapy, medical, legal) in every conversation.
- It creates new criminal and civil penalties for companies that knowingly provide chatbots to minors that solicit or facilitate sexual content, self-harm or violence.

Bipartisan lawmakers, including Sens. Josh Hawley and Richard Blumenthal, introduced the GUARD Act to protect minors from unregulated AI chatbots. (Kurt "CyberGuy" Knutsson)
The motivation: lawmakers cite testimony from parents and child welfare experts and growing lawsuits alleging that some chatbots manipulated minors, encouraged self-harm or worse. The basic framework of the GUARD Act is clear, but the details reveal how extensive its reach could be for tech companies and families alike.
Why is this such a big deal?
This bill is more than another piece of tech regulation. It sits at the center of a growing debate over how far artificial intelligence should reach into children's lives.
Rapid AI growth + child safety concerns
AI chatbots are no longer toys. Many kids are using them. Hawley cited more than 70 percent of American children engaging with these products. These chatbots can provide human-like responses and emotional mimicry, and sometimes invite ongoing conversations. For minors, these interactions can blur the boundaries between machine and human, and they may seek guidance or emotional connection from an algorithm rather than a real person.
Legal, ethical and technological stakes
If this bill passes, it could reshape how the AI industry handles minors, age verification, disclosures and liability. It shows that Congress is ready to move away from voluntary self-regulation and toward firm guardrails when children are involved. The proposal may also open the door to similar laws in other high-risk areas, such as mental health bots and educational assistants. Overall, it marks a shift from waiting to see how AI develops to acting now to protect young users.

Parents across the country are calling for stronger safeguards as more than 70 percent of children use AI chatbots that can mimic empathy and emotional support. (Kurt "CyberGuy" Knutsson)
Industry pushback and innovation concerns
Some tech companies argue that such regulation could stifle innovation, limit beneficial uses of conversational AI (education, mental-health support for older teens) or impose heavy compliance burdens. This tension between safety and innovation is at the heart of the debate.
What the GUARD Act requires from AI companies
If passed, the GUARD Act would impose strict federal standards on how AI companies design, verify and manage their chatbots, especially when minors are involved. The bill outlines several key obligations aimed at protecting children and holding companies accountable for harmful interactions.
- The first major requirement centers on age verification. Companies must use reliable methods such as government-issued identification or other proven tools to confirm that a user is at least 18 years old. Simply asking for a birthdate is no longer enough.
- The second rule involves clear disclosures. Every chatbot must tell users at the start of each conversation, and at regular intervals, that it is an artificial intelligence system, not a human being. The chatbot must also clarify that it does not hold professional credentials such as medical, legal or therapeutic licenses.
- Another provision establishes an access prohibition for minors. If a user is verified as under 18, the company must block access to any "AI companion" feature that simulates friendship, therapy or emotional communication.
- The bill also introduces civil and criminal penalties for companies that break these rules. Any chatbot that encourages or engages in sexually explicit conversations with minors, promotes self-harm or incites violence could trigger significant fines or legal consequences.
- Finally, the GUARD Act defines an AI companion as a system designed to foster interpersonal or emotional interaction with users, such as friendship or therapeutic dialogue. This definition makes it clear that the law targets chatbots capable of forming human-like connections, not limited-purpose assistants.
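For readers curious how these obligations might translate into software, here is a minimal, purely illustrative sketch of the compliance flow the bill describes: verified age gating, blocking companion features for minors, and attaching the required AI disclosure at the start of a conversation and at intervals. All names and the 10-message interval are hypothetical assumptions, not anything specified in the bill; a real system would rely on a vetted identity-verification service.

```python
# Hypothetical sketch of the GUARD Act's compliance flow; names and the
# disclosure interval are illustrative assumptions, not from the bill.
from dataclasses import dataclass

DISCLOSURE = ("I am an artificial intelligence system, not a human, and I "
              "hold no professional medical, legal or therapeutic credentials.")

@dataclass
class User:
    id_verified: bool  # verified via government ID or similar, not self-reported
    age: int

def can_access_companion(user: User) -> bool:
    """Companion features require verified proof the user is at least 18."""
    return user.id_verified and user.age >= 18

def companion_reply(user: User, message_count: int, reply: str,
                    disclosure_interval: int = 10) -> str:
    """Block minors outright; otherwise prepend the required disclosure
    on the first message and again at regular intervals."""
    if not can_access_companion(user):
        raise PermissionError("AI companion features are unavailable to minors.")
    if message_count % disclosure_interval == 0:
        return f"{DISCLOSURE}\n{reply}"
    return reply
```

The point of the sketch is the ordering: the age check happens before any companion behavior runs at all, and the disclosure is injected by the platform rather than left to the model.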

The proposed GUARD Act would require chatbots to verify users’ ages, disclose they are not human and block under-18 users from AI companion features. (Kurt "CyberGuy" Knutsson)
How to stay safe in the meantime
Technology often moves faster than laws, which means families, schools and caregivers must take the lead in protecting young users right now. These steps can help create safer online habits while lawmakers debate how to regulate AI chatbots.
1) Know which bots your kids use
Start by finding out which chatbots your kids talk to and what those bots are designed for. Some are made for entertainment or education, while others focus on emotional support or companionship. Understanding each bot's purpose helps you see when a tool crosses from harmless fun into something more personal or manipulative.
2) Set clear rules about interaction
Even if a chatbot is labeled safe, decide together when and how it can be used. Encourage open communication by asking your kid to show you their chats and explain what they like about them. Framing this as curiosity, not control, builds trust and keeps the conversation going.
3) Use parental controls and age filters
Take advantage of built-in safety features whenever possible. Turn on parental controls, activate kid-friendly modes and block apps that allow private or unmonitored chats. Small settings changes can make a big difference in reducing exposure to harmful or suggestive content.
4) Teach children that bots are not humans
Remind kids that even the most advanced chatbot is still software. It can mimic empathy, but it does not understand or care in a human sense. Help them recognize that advice about mental health, relationships or safety should always come from trusted adults, not from an algorithm.
5) Watch for warning signs
Stay alert for changes in behavior that could signal a problem. If a child becomes withdrawn, spends long hours chatting privately with a bot or repeats harmful ideas, step in early. Talk openly about what is happening, and if necessary, seek professional help.
6) Stay informed as the laws evolve
Regulations such as the GUARD Act and new state measures, including California's SB 243, are still taking shape. Keep up with the updates so you know what protections exist and which questions to ask app developers or schools. Awareness is the first line of defense in a fast-moving digital world.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my Quiz here: Cyberguy.com.
Kurt's key takeaways
The GUARD Act represents a bold step toward regulating the intersection of minors and AI chatbots. It reflects growing concern that unmoderated AI companionship might harm vulnerable users, especially children. Of course, regulation alone won't solve every problem; industry practices, platform design, parental engagement and education all matter. But this bill signals that the era of "build it and see what happens" for conversational AI may be ending when children are involved. As technology continues to evolve, our laws and our personal practices must evolve too. For now, staying informed, setting boundaries and treating chatbot interactions with the same scrutiny we treat human ones can make a real difference.
If a law like the GUARD Act becomes reality, should we expect similar regulation for all emotional AI tools aimed at kids (tutors, virtual friends, games), or are chatbots fundamentally different? Let us know by writing to us at Cyberguy.com.
Copyright 2025 CyberGuy.com. All rights reserved.
Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.