Child Safety and Protection Standards

Effective Date: February 14, 2026

1. Our Commitment to Child Safety

Nuruni is committed to providing a safe environment for adults seeking meaningful relationships. We have zero tolerance for child sexual abuse and exploitation (CSAE) and maintain strict policies and technical safeguards to prevent minors from accessing our platform and to protect all users from harmful content.

Our safety standards comply with international child protection laws and industry best practices.

2. Age Verification and Enforcement

Nuruni is exclusively for adults aged 18 years and older. We enforce this requirement through multiple layers of verification:

  • Age Declaration: All users must confirm they are 18+ during registration
  • Identity Verification: Mandatory AI-powered selfie verification that matches a real-time selfie against profile photos
  • Photo Review: Our moderation team reviews all profile photos to identify and remove accounts that appear to belong to minors
  • Document Verification: We reserve the right to request government-issued ID for age verification in cases of suspected underage use
  • Continuous Monitoring: Automated systems and manual reviews to detect suspicious accounts

Any account suspected of belonging to a minor is immediately suspended pending verification. Confirmed violations result in permanent account termination.

3. Zero Tolerance for CSAE

Nuruni maintains absolute zero tolerance for child sexual abuse material (CSAM) and all related forms of child sexual abuse and exploitation:

  • Content Prohibition: Any content depicting, promoting, or soliciting child sexual abuse or exploitation is strictly prohibited
  • Immediate Action: Suspected CSAM is immediately reported to the National Center for Missing & Exploited Children (NCMEC) and relevant law enforcement agencies
  • Permanent Ban: Users who post, share, or solicit CSAM are permanently banned and reported to authorities
  • Proactive Detection: We employ automated detection systems and manual review processes to identify prohibited content
  • User Reports: Easy-to-use reporting tools available on every profile and message

4. Proactive Content Moderation

Our multi-layered content moderation system protects users from harmful content:

Automated Moderation:

  • AI-powered image scanning to detect inappropriate content before it goes live
  • Text analysis to identify grooming patterns, solicitation, and predatory behavior
  • Behavioral analysis to flag suspicious account activity

Human Review:

  • Dedicated moderation team reviewing flagged content 24/7
  • All profile photos reviewed before account activation
  • User-reported content reviewed within 24 hours
  • Regular audits of high-risk accounts and conversations

Content Standards:

  • No explicit, sexual, or suggestive imagery
  • No language that sexualizes minors or promotes CSAE
  • No attempts to move conversations off-platform for illicit purposes
  • No sharing of external links to unsafe or illegal content

5. User Safety Tools and Features

Nuruni provides comprehensive safety tools for all users:

Reporting Mechanisms:

  • Report button on every profile and message
  • Specific category for reporting suspected minors or CSAE
  • Anonymous reporting to protect reporter privacy
  • Confirmation of receipt and action taken when appropriate

Blocking and Privacy:

  • One-tap blocking to prevent unwanted contact
  • Control over profile visibility and information sharing
  • Secure messaging that prevents screenshot capture
  • Option to unmatch and remove all conversation history

Verification Badge:

  • Visual indicator showing verified profiles
  • AI-powered selfie verification to confirm identity
  • Encouragement for all users to complete verification

Education and Resources:

  • Safety tips displayed throughout the app
  • Links to external safety resources and support organizations
  • Guidance on recognizing and reporting suspicious behavior

6. Cooperation with Law Enforcement

Nuruni actively cooperates with law enforcement and child protection agencies:

  • NCMEC Reporting: Suspected CSAM is immediately reported to the National Center for Missing & Exploited Children (NCMEC) as required by law
  • Law Enforcement Requests: We respond promptly to valid legal requests from law enforcement agencies
  • Data Preservation: User data related to CSAE investigations is preserved and provided to authorized agencies
  • Proactive Disclosure: We proactively report suspected criminal activity to relevant authorities
  • Industry Collaboration: We participate in industry initiatives to combat online child exploitation

7. Employee Training and Accountability

Our team is trained and equipped to handle child safety issues:

  • Mandatory Training: All employees receive comprehensive training on child safety policies, CSAE identification, and reporting procedures
  • Content Moderator Support: Specialized training and mental health support for moderation team members
  • Clear Protocols: Documented procedures for handling suspected CSAE with regular updates
  • Regular Audits: Internal audits to ensure compliance with safety standards
  • Accountability Measures: Strict consequences for employees who fail to follow child safety protocols

8. Privacy Protections

While our platform is adults-only, we maintain robust privacy protections:

  • Data Minimization: We collect only necessary information for service provision and safety
  • Secure Storage: All user data is encrypted and stored on secure servers
  • Limited Access: Only authorized personnel can access user information for safety and moderation purposes
  • No Third-Party Sharing: We never sell user data or share it with third parties for marketing
  • Transparent Policies: Clear privacy policy explaining data collection, use, and protection

9. Continuous Improvement

We are committed to continuously improving our safety measures:

  • Technology Updates: Regular updates to detection and moderation systems
  • Policy Review: Periodic review and enhancement of safety policies
  • User Feedback: Incorporation of user feedback to improve safety features
  • Industry Standards: Monitoring and adoption of evolving industry best practices
  • Independent Audits: Regular third-party security and safety audits

10. Reporting and Contact Information

If you encounter any content or behavior that violates our child safety standards:

In-App Reporting:

  • Use the report button on any profile or message
  • Select "Minor on platform" or "Child safety concern"
  • Provide any relevant details to help our investigation

Direct Contact:

Nuruni Technologies Ltd
Dar es Salaam, Tanzania
Email: [email protected]

Emergency Situations:

  • Contact local law enforcement immediately
  • Report to the NCMEC CyberTipline: www.cybertipline.org
  • Then notify us so we can take platform action

We take every report seriously and investigate all child safety concerns with the highest priority. Reports are reviewed immediately, and appropriate action is taken within 24 hours.


Zero Tolerance for Child Exploitation

Nuruni maintains the highest standards of child safety and protection. Any violation of our child safety policies results in immediate account termination and reporting to law enforcement.


Need Help?

If you have questions about our safety standards or need to report a concern, please contact us at [email protected] or use the in-app reporting tools.