ETHICS & VALUES

Ethics Declaration

Our commitments to users, society, and the field of mental health

Babyin AI recognizes that AI applications in mental health carry a special responsibility and demand a special degree of trust. We hereby declare our core ethical principles — not merely as promises, but as the foundation of every product decision we make.

01. Human-Centered, Not Technology-Centered

We believe technology's value lies in serving human wellbeing. Every feature in Babyin AI is evaluated first by 'Does it genuinely help the user?' — not by 'Can we technically build it?' We refuse to commodify users' emotional struggles or manufacture anxiety to increase engagement.

02. Transparent Boundaries — We Don't Overstep

We communicate clearly to users: Babyin AI is an emotional support tool — not a therapist, not a psychiatric diagnostic system, and not a crisis intervention service. We display a disclaimer at the start of every conversation, and when crisis signals are detected we immediately guide users to professional help. We would rather lose a conversation than give users a false sense of security.

03. Privacy Protection — No Compromise

What users share on Babyin AI often represents their most vulnerable, most private moments. We commit: conversations are not used for model training, not sold or shared with third parties, and we do not proactively follow up after sessions end. Users have the right to delete all their data at any time.

04. Safety Rails — Cannot Be Bypassed

We have implemented strict keyword-triggered safety mechanisms. When users express self-harm, harm to others, or extreme distress, the system immediately pauses and provides crisis hotlines. These safety rails cannot be bypassed through any 'roleplay' or 'hypothetical' framing. Protecting user safety is our non-negotiable bottom line.

05. Prevent Dependency — Encourage Growth

We do not want users to develop unhealthy emotional dependence on Babyin AI. Our goal is to help users build their own emotional regulation capabilities — not to become their only emotional outlet. We guide users toward building real-world support networks at appropriate moments and recommend professional in-person counseling when necessary.

06. Equitable Access — No Discrimination

Mental health support should not be a privilege for the few. We provide a free tier to ensure basic features are accessible to all users. Our AI is trained to maintain a non-judgmental attitude toward all cultural backgrounds, gender identities, and age groups. We continuously audit and eliminate potential biases in our systems.

07. Ongoing Accountability — Open to Scrutiny

We welcome users, researchers, and regulators to review our products. We commit to publishing regular transparency reports on our practices in safety, privacy protection, and ethical compliance. If we make mistakes, we will publicly acknowledge and correct them.

Crisis Resources

If you or someone you know is experiencing a mental health crisis, please contact these professional services immediately:

988 Suicide & Crisis Lifeline (US): Call or text 988
Crisis Text Line (US/CA/UK): Text HOME to 741741
Samaritans (UK/IE): 116 123
Lifeline (AU): 13 11 14

Contact Us

If you have questions or suggestions about our ethical practices, please reach out:

[email protected]

Last updated: March 2026