Actions Speak Louder Than Chats: Investigating AI Chatbot Age Gating
arXiv:2602.10251v1 Announce Type: new
Abstract: AI chatbots are widely used by children and teens today, but they pose significant risks to youth’s privacy and safety due to both increasingly personal conversations and potential exposure to unsafe content. While children under 13 are protected by the Children’s Online Privacy Protection Act (COPPA), chatbot providers’ own privacy policies may also provide protections, since they typically prohibit children from accessing their platforms. Age gating is often employed to restrict children online, but chatbot age gating in particular has not been studied. In this paper, we investigate whether popular consumer chatbots are (i) able to estimate users’ ages based solely on their conversations, and (ii) whether they take action upon identifying children. To that end, we develop an auditing framework in which we programmatically interact with chatbots and conduct 1050 experiments using our comprehensive library of age-indicative prompts, including implicit and explicit age disclosures, to analyze the chatbots’ responses and actions. We find that while chatbots are capable of estimating age, they do not take any action when children are identified, contradicting their own policies. Our methodology and findings provide insights for platform design, demonstrated by our proof-of-concept chatbot age gating implementation, and regulation to protect children online.