Our commitments to users, society, and the field of mental health
Babyin AI recognizes that AI applications in mental health carry a special responsibility and depend on a special degree of trust. We hereby solemnly declare our core ethical principles — not merely as promises, but as the foundation of every product decision we make.
We believe technology's value lies in serving human wellbeing. Every feature in Babyin AI is evaluated first by 'Does it genuinely help the user?' — not by 'Can we technically build it?' We refuse to commodify users' emotional struggles or manufacture anxiety to increase engagement.
We clearly communicate to users: Babyin AI is an emotional support tool — not a therapist, not a psychiatric diagnostic system, and not a crisis intervention service. We display a disclaimer at the start of every conversation and immediately guide users to professional help when crisis signals are detected. We would rather lose a conversation than give users a false sense of security.
What users share on Babyin AI often represents their most vulnerable, most private moments. We commit: conversations are not used for model training, not sold or shared with third parties, and we do not proactively follow up after sessions end. Users have the right to delete all their data at any time.
We have implemented strict keyword-triggered safety mechanisms. When users express self-harm, harm to others, or extreme distress, the system immediately pauses and provides crisis hotlines. These safety rails cannot be bypassed through any 'roleplay' or 'hypothetical' framing. Protecting user safety is our non-negotiable bottom line.
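For readers who want a concrete picture of what a keyword-triggered check of this kind involves, the sketch below shows one possible shape of such a mechanism. It is illustrative only, under stated assumptions: the pattern list, the check_message function, and the SafetyResult type are hypothetical names invented for this example, not our production code, and a real deployment would use far broader detection than a short keyword list.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified pattern list for illustration; a real system
# would cover far more expressions and languages.
CRISIS_PATTERNS = [
    r"\b(kill|hurt|harm)\s+myself\b",
    r"\bend it all\b",
    r"\bsuicid(e|al)\b",
]

@dataclass
class SafetyResult:
    triggered: bool
    response: Optional[str] = None

def check_message(raw_user_text: str) -> SafetyResult:
    """Check the raw user message for crisis signals.

    The check runs on the user's text itself, before any roleplay or
    'hypothetical' framing in the conversation is taken into account,
    so reframing the request cannot route around it. On a match, the
    session is paused and crisis resources are surfaced.
    """
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, raw_user_text, flags=re.IGNORECASE):
            return SafetyResult(
                triggered=True,
                response=(
                    "It sounds like you may be in serious distress. "
                    "Please contact a crisis hotline or local emergency "
                    "services right away."
                ),
            )
    return SafetyResult(triggered=False)
```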
We do not want users to develop unhealthy emotional dependence on Babyin AI. Our goal is to help users build their own emotional regulation capabilities — not to become their only emotional outlet. We guide users toward building real-world support networks at appropriate moments and recommend professional in-person counseling when necessary.
Mental health support should not be a privilege for the few. We provide a free tier to ensure basic features are accessible to all users. Our AI is trained to maintain a non-judgmental attitude toward all cultural backgrounds, gender identities, and age groups. We continuously audit and eliminate potential biases in our systems.
We welcome users, researchers, and regulators to review our products. We commit to publishing regular transparency reports on our practices in safety, privacy protection, and ethical compliance. If we make mistakes, we will publicly acknowledge and correct them.
If you or someone you know is experiencing a mental health crisis, please contact these professional services immediately:
If you have questions or suggestions about our ethical practices, please reach out:
[email protected]

Last updated: March 2026