Meaamor
Policy Document

Standards Against Child Sexual Abuse and Exploitation (CSAE)

Meaamor Mobile Application

Effective Date: February 22, 2026
Last Reviewed: February 22, 2026
Version: 1.0
01

Purpose and Commitment

Meaamor is a relationship enhancement platform designed for couples to deepen their connection through quizzes, shared timelines, and intimate communication. We are firmly committed to the safety and protection of all users, especially children, and maintain a zero-tolerance policy toward child sexual abuse and exploitation (CSAE) in any form across our platform.

This document establishes the published standards, policies, and procedures that Meaamor implements to prevent, detect, and respond to CSAE content and behavior within the application.

02

Scope

These standards apply to:

  • All user-generated content shared through the platform (messages, photos, drawings, notes)
  • All forms of communication facilitated by the platform (individual chats, partner interactions)
  • All user profiles and account information
  • All calendar events and relationship timeline entries created within the platform
  • All employees, contractors, and third-party service providers associated with Meaamor
03

Definitions

  • CSAM: Child Sexual Abuse Material — any visual, textual, or digital content that depicts a minor in sexually explicit conduct
  • CSAE: Child Sexual Abuse and Exploitation — the sexual abuse, exploitation, or trafficking of children in any form
  • Grooming: The deliberate process by which an offender builds a relationship with a child to facilitate sexual abuse or exploitation
  • Minor/Child: Any individual under the age of 18 years
  • AIT-CSAM: AI-generated or AI-manipulated imagery depicting a minor in sexually explicit scenarios
  • Sextortion: Coercion of a minor to produce or share sexual content through threats, manipulation, or blackmail
04

Prohibited Conduct

The following conduct is strictly prohibited on the Meaamor platform:

4.1 Content-Related Prohibitions

  • Sharing, uploading, distributing, or storing child sexual abuse material (CSAM) in any format — including images, videos, documents, or text — through chats, file attachments, or any other feature
  • Creating, soliciting, or facilitating the production of CSAM
  • Sharing AI-generated CSAE content (AIT-CSAM), including deepfakes or synthetic media involving minors
  • Distributing links to external sites or services hosting CSAM

4.2 Behavioral Prohibitions

  • Engaging in grooming behavior toward minors through individual chat features
  • Using the platform to solicit, recruit, or coerce minors for sexual purposes
  • Sextortion or blackmail of minors via messaging or file-sharing features
  • Using calendar scheduling, notes, or any other feature to coordinate or facilitate the exploitation of children
  • Creating chats or conversations for the purpose of sharing or discussing CSAE content
  • Attempting to circumvent CSAE detection or prevention systems
05

Prevention Measures

5.1 Age Verification and Account Controls

  • Users must be at least 13 years of age (or the minimum age required by their jurisdiction) to create an account
  • Age verification is enforced during the registration process
  • Parents and guardians are responsible for monitoring their children's online activity

5.2 Content Moderation and Detection

  • Proactive Scanning: All file attachments (images, drawings, notes) shared through the messaging system are subject to automated scanning using industry-standard detection technologies, including hash-matching against known CSAM databases (e.g., NCMEC hash lists, PhotoDNA)
  • Text-based Detection: Keyword and pattern analysis is applied to chat messages to identify grooming language and CSAE-related terminology
  • AI-based Classification: Machine learning models are employed to flag potentially exploitative content or behavioral patterns
  • Human Review: Flagged content is escalated to trained trust and safety reviewers for assessment

5.3 Privacy-Preserving Safeguards

  • Detection mechanisms are designed to minimize privacy intrusion for legitimate users
  • False-positive flagged content is handled with strict confidentiality and promptly cleared
  • Detection systems are regularly audited for accuracy and proportionality

5.4 Platform Design Safeguards

  • Role-Based Access Control: Platform administrators and moderators can manage accounts and remove harmful users
  • Reporting Tools: In-app reporting mechanisms allow users to flag suspicious content or behavior directly from chats and user profiles
  • Block and Restrict: Users can block individuals who exhibit harmful behavior
  • Limited Discoverability: User profiles are not publicly searchable to prevent unsolicited contact
06

Detection and Reporting Procedures

6.1 Internal Detection

When CSAE content or behavior is detected — either through automated systems or user reports — the following procedure is activated:

  1. Immediate Containment: The flagged content is quarantined and made inaccessible to other users
  2. Account Suspension: The offending account is suspended pending investigation
  3. Evidence Preservation: All relevant data (messages, files, metadata, timestamps, user information) is preserved in a secure, immutable log for law enforcement purposes
  4. Internal Review: A trained trust and safety team member reviews the case within 24 hours

6.2 Mandatory External Reporting

In compliance with applicable laws (including U.S. 18 U.S.C. § 2258A and equivalent international legislation):

  • NCMEC CyberTipline: Confirmed CSAM is reported to the National Center for Missing & Exploited Children (NCMEC) within 24 hours of confirmation
  • Law Enforcement: Reports are made to the appropriate local, national, or international law enforcement agencies as required by jurisdiction
  • International Coordination: For cross-border cases, coordination is facilitated through INTERPOL and relevant national agencies
  • Indian Law (IT Act, 2000): In compliance with Section 67B of the Information Technology Act, 2000 (India), any CSAE content is reported to the Indian Computer Emergency Response Team (CERT-In) and relevant law enforcement

6.3 User Reporting Mechanisms

Users are encouraged and empowered to report CSAE-related concerns through:

  • In-App Reporting: A dedicated report button accessible from message bubbles, user profiles, and settings
  • Email: A dedicated email address (support@maneged.com) monitored by the trust and safety team
  • Help Center: Guidance on identifying and reporting CSAE content available in the in-app Help Center
  • Anonymous Reporting: Users may submit reports anonymously where local law permits
07

Enforcement Actions

7.1 For Confirmed CSAE Violations

  • Immediate account termination: The offending account is permanently banned with no option for reinstatement
  • Content removal: All CSAE content is removed from the platform immediately upon confirmation
  • Device blacklisting: Devices associated with the offending account may be blocked from re-registration
  • Cross-platform notification: Where applicable, partner platforms and relevant industry databases are notified
  • Law enforcement referral: All evidence is provided to law enforcement

7.2 For Grooming or Suspicious Behavior

  • Account suspension: Pending investigation and review
  • Conversation restrictions: The ability to initiate new chats or conversations is revoked during the investigation
  • Monitoring: Enhanced monitoring may be applied where legally permitted
  • Escalation: If grooming is confirmed, the account is terminated and reported to law enforcement

7.3 Appeals Process

  • Users subject to enforcement actions for CSAE violations have no right of appeal for confirmed violations involving CSAM
  • For accounts flagged due to false-positive detection, users may submit an appeal to support@maneged.com within 30 days
08

Data Retention and Cooperation with Law Enforcement

  • Evidence Retention: CSAE-related data and evidence are retained for a minimum of one year (or longer if required by applicable law or an ongoing investigation) in a secure, encrypted, access-controlled storage environment
  • Law Enforcement Cooperation: Meaamor cooperates fully and promptly with law enforcement agencies investigating CSAE cases, providing data and evidence as legally required
  • Preservation Requests: We honor valid legal preservation requests and ensure data is not destroyed during active investigations
  • Transparency Reports: Meaamor publishes periodic transparency reports detailing the volume of CSAE reports made, actions taken, and detection statistics (without compromising ongoing investigations or victim identities)
09

Employee and Contractor Standards

9.1 Training

  • All employees with access to user data or content moderation responsibilities must complete mandatory CSAE awareness training upon onboarding and annually thereafter
  • Training covers identification of CSAM, grooming behaviors, reporting obligations, trauma-informed handling of content, and mental health support

9.2 Background Checks

  • Employees and contractors with access to user data, content moderation, or trust and safety roles undergo comprehensive background checks

9.3 Employee Wellbeing

  • Trust and safety reviewers who review flagged content have access to counseling services, mandatory wellness breaks, and psychological support programs
  • Exposure limits are enforced to prevent secondary traumatization
10

Third-Party and Infrastructure Obligations

  • Cloud and Backend Providers (Appwrite, Firebase): All third-party infrastructure providers used by Meaamor are required to maintain their own CSAE policies and cooperate with CSAE detection and reporting efforts
  • Push Notification Services (FCM): Notification content is reviewed to prevent use as a CSAE distribution channel
  • CDN and File Storage: All stored files are subject to the same scanning and moderation standards as real-time uploads
11

Governance and Review

11.1 Standards Review Cycle

These standards are reviewed and updated at least once annually, or immediately in response to:

  • Changes in applicable law or regulation
  • Emerging CSAE threats or tactics
  • Significant platform feature changes (e.g., video/voice calling integration, new file-sharing capabilities)
  • Recommendations from child safety organizations

11.2 Responsible Team

  • Chief Safety Officer: Overall ownership of CSAE policy and compliance
  • Trust & Safety Lead: Operational management of content moderation and reporting
  • Engineering Lead: Implementation and maintenance of detection technologies
  • Legal Counsel: Regulatory compliance and law enforcement coordination

11.3 External Advisory

Meaamor seeks guidance from and aligns with leading child safety organizations including:

  • National Center for Missing & Exploited Children (NCMEC)
  • Internet Watch Foundation (IWF)
  • International Centre for Missing & Exploited Children (ICMEC)
  • Tech Coalition (industry collaboration against CSAE)
  • WePROTECT Global Alliance
12

Alignment with Industry Standards and Legal Frameworks

These standards are developed in alignment with:

  • U.S. PROTECT Act (2003): Criminalizes virtual and AI-generated CSAM
  • U.S. 18 U.S.C. § 2258A: Mandatory CSAM reporting obligations for electronic service providers
  • EU Directive 2011/93/EU: Combating the sexual abuse and exploitation of children
  • UK Online Safety Act (2023): Duty of care for platforms regarding child safety
  • India IT Act, 2000 (Section 67B): Prohibition of CSAE content in electronic form
  • UNCRC (Article 34): UN Convention on the Rights of the Child — protection from sexual exploitation
  • Voluntary Principles to Counter CSAE: Industry best-practices framework by the Tech Coalition
  • Santa Clara Principles: Transparency in content moderation practices
13

Contact Information

For CSAE-related concerns, reports, or inquiries:

  • In-App Report: Available from any message, profile, or settings screen
  • Safety Email: support@maneged.com
  • Appeals: support@maneged.com
  • Legal / Law Enforcement: support@maneged.com
  • NCMEC CyberTipline: https://report.cybertip.org
14

User Acknowledgment

By creating an account and using the Meaamor platform, all users acknowledge and agree to comply with these CSAE standards. Violations will result in the enforcement actions outlined in Section 7, including permanent account termination and referral to law enforcement.