
CHILD SAFETY AND CSAE PREVENTION STANDARDS

 

Child Sexual Abuse and Exploitation Prevention Policy

 

Last updated: February 2, 2026

 

1. Policy Statement

 

CatScanR Inc. (“CatScanR”, “we”, “us”, or “our”) is unequivocally committed to protecting children from all forms of sexual abuse and exploitation. We have zero tolerance for Child Sexual Abuse Material (CSAM), child grooming, child trafficking, or any content or behavior that sexualizes, endangers, or exploits minors.

 

This policy outlines our comprehensive approach to preventing, detecting, and responding to child safety risks on our platform. We comply with all applicable laws, including:

 

  1. Swiss Federal Act on Data Protection (nFADP)

  2. European Union General Data Protection Regulation (GDPR)

  3. U.S. laws including 18 U.S.C. § 2258A (CSAM reporting requirements)

  4. Apple App Store and Google Play Store child safety requirements

  5. International conventions including the UN Convention on the Rights of the Child

 

2. Scope and Definitions

 

2.1. Who is a Child?

For purposes of this policy, a “child” or “minor” is any person under 18 years of age, regardless of the age of majority in their jurisdiction.

 

2.2. Prohibited Content and Conduct

The following are strictly prohibited on CatScanR:

 

  1. Child Sexual Abuse Material (CSAM): Any visual depiction of sexually explicit conduct involving a minor, including photographs, videos, digital images, and computer-generated images.

  2. Child Sexual Exploitation: Any attempt to sexually exploit a child, including solicitation, enticement, or coercion of a child to engage in sexual activity.

  3. Child Grooming: Any communication or interaction designed to build trust with a child or their guardian for the purpose of sexual abuse or exploitation.

  4. Sexualization of Minors: Any content that sexualizes minors, including suggestive or sexual comments about children, inappropriate images, or content that depicts children in a sexual manner.

  5. Child Trafficking: Any facilitation, promotion, or engagement in child trafficking or exploitation.

  6. Age Misrepresentation: Adults posing as minors or minors misrepresenting their age to access the platform.

  7. Inappropriate Contact: Adults attempting to contact, communicate with, or meet minors for inappropriate purposes.

 

3. Age Restrictions and Verification

 

3.1. Minimum Age Requirement

CatScanR is restricted to users aged 18 and older. We do not knowingly allow children under 18 to create accounts or use our platform.

 

3.2. Age Verification Measures

We implement the following age verification measures:

 

  1. Users must confirm they are 18 or older during registration.

  2. Age verification through third-party authentication providers (Google, Facebook, Apple).

  3. Behavioral analysis to detect potential underage users.

  4. Community reporting mechanisms for suspected underage accounts.

  5. Additional verification requests when suspicious activity is detected.

 

3.3. Immediate Action on Underage Accounts

If we become aware that a user is under 18:

 

  1. The account is immediately suspended.

  2. All user content is deleted.

  3. Personal data is deleted as required by law.

  4. If there are indicators of grooming or exploitation, we report to authorities.

 

4. Content Moderation and Detection

 

4.1. Automated Detection Systems

We employ multiple layers of automated detection:

 

  1. Hash Matching: We use PhotoDNA, MD5, SHA-1, and other hash-matching technologies to detect known CSAM against databases maintained by the National Center for Missing & Exploited Children (NCMEC), Internet Watch Foundation (IWF), and other recognized authorities.

  2. AI/ML Classification: Machine learning models trained to identify potential CSAM, grooming patterns, and inappropriate content involving minors.

  3. Text Analysis: Natural language processing to detect grooming language, solicitation attempts, and inappropriate communications.

  4. Behavioral Signals: Pattern analysis to identify suspicious account behavior consistent with child exploitation.

 

4.2. Human Review

All content flagged by automated systems undergoes human review by trained content moderators who:

 

  1. Are specifically trained in child safety and CSAM identification.

  2. Follow established protocols aligned with industry standards.

  3. Have access to mental health support due to the nature of the work.

  4. Operate under strict confidentiality and security protocols.

 

4.3. User Reporting

We provide multiple channels for users to report concerns:

 

  1. In-app reporting button on every profile and piece of content.

  2. Dedicated email addresses: safety@catscanr.com and roman.babenko@catswoppr.io

  3. Anonymous reporting options to protect reporter identity.

  4. 24/7 monitoring of all reports with priority escalation for child safety issues.

 

5. Platform Design Safety Features

 

5.1. Built-in Safety Measures

CatScanR incorporates safety-by-design principles:

 

  1. Location Privacy: Exact addresses are never displayed publicly — only approximate city-level locations appear on the map.

  2. GPS Verification: PawsUp meetup verification requires physical proximity, making it difficult for online-only predators to exploit the system.

  3. Public-First Design: All profiles and content are public, reducing opportunities for private grooming.

  4. Messaging Transparency: Message history is logged and can be reviewed if reports are filed.

  5. No Private Media Sharing: Photos and videos can only be posted to public profiles, not sent privately in messages.

  6. Rate Limiting: Limits on messaging frequency and PawsUp scans prevent mass contact attempts.

 

5.2. Metadata Retention

For child safety investigations, we retain:

 

  1. Account creation data including IP addresses and device identifiers.

  2. Message logs and interaction histories.

  3. GPS data from PawsUp verifications.

  4. Deleted content, retained temporarily for investigation purposes.

 

6. Incident Response Protocol

 

6.1. Immediate Actions

Upon detection or report of CSAM or child exploitation:

 

  1. Within 1 hour: The content is removed and the account is suspended.

  2. Within 24 hours: A report is filed with NCMEC (U.S.), the Swiss Federal Police, and other relevant authorities.

  3. Preservation: All evidence is preserved for law enforcement (minimum 90 days, or longer if requested).

  4. Hash Creation: Detected CSAM is hashed and added to our detection database.

  5. Account Ban: Permanent ban with device fingerprinting to prevent account recreation.

 

6.2. Law Enforcement Cooperation

We cooperate fully with law enforcement, including:

 

  1. Providing all requested data within legal timeframes.

  2. Maintaining evidence preservation holds as requested.

  3. Designating a law enforcement liaison for urgent requests.

  4. Responding to emergency disclosure requests within 1 hour.

 

6.3. Mandatory Reporting

We file reports with the following authorities as legally required:

 

  1. United States: National Center for Missing & Exploited Children (NCMEC) via CyberTipline.

  2. Switzerland: Swiss Federal Office of Police (fedpol) and relevant cantonal authorities.

  3. European Union: Europol and national authorities in affected member states.

  4. International: Interpol for cross-border cases.

 

7. Training and Accountability

 

7.1. Team Training

All CatScanR team members receive mandatory training on:

 

  1. Child safety best practices and indicators of abuse.

  2. CSAM identification and reporting procedures.

  3. Grooming tactics and exploitation patterns.

  4. Legal obligations under Swiss, EU, and international law.

  5. Trauma-informed response protocols.

 

Training is updated annually and whenever significant policy or legal changes occur.

 

7.2. Third-Party Audits

We commit to:

 

  1. Annual third-party audits of our child safety measures.

  2. Participation in industry working groups (e.g., Technology Coalition, INHOPE).

  3. Transparency reports disclosing child safety metrics (without compromising investigations).

 

8. User Education and Resources

 

8.1. Safety Resources

We provide users with:

 

  1. In-app safety center with guidelines for safe meetups.

  2. Red flag indicators for recognizing suspicious behavior.

  3. Links to child safety organizations and hotlines.

  4. Clear reporting mechanisms and what happens after a report.

 

8.2. Community Standards

Users agree to:

 

  1. Report any suspected child exploitation immediately.

  2. Not attempt to contact minors through the platform.

  3. Meet only in public places and take safety precautions.

  4. Not share personal information that could endanger children.

 

9. Continuous Improvement

 

We are committed to continuously improving our child safety measures through:

 

  1. Regular security and safety audits.

  2. Adoption of new detection technologies as they become available.

  3. Collaboration with child safety organizations and experts.

  4. Monitoring emerging threats and exploitation tactics.

  5. User feedback on safety features and concerns.

 

10. Contact and Reporting Channels

 

10.1. Report Child Safety Concerns

Email: safety@catscanr.com (monitored 24/7)

In-app: Use the “Report” button on any profile or content

Emergency: If you believe a child is in immediate danger, contact local law enforcement immediately

 

10.2. External Reporting Resources

NCMEC CyberTipline (U.S.): www.cybertipline.org

Swiss Federal Police: +41 58 463 11 23

Europol: www.europol.europa.eu/report-a-crime

Internet Watch Foundation: www.iwf.org.uk

 

10.3. Company Contact

CatScanR Inc.

[Company Address]

[City, Postal Code]

Switzerland

Child Safety Officer: safety@catscanr.com / roman.babenko@catswoppr.io

 

11. Policy Updates

 

This policy may be updated as laws, technologies, and best practices evolve. Material changes will be communicated to users. The current version is always available at www.catscanr.com/child-safety.

 

Last reviewed: February 2, 2026

Next scheduled review: August 2, 2026
