Data retention policy
The platform applies tiered retention based on data sensitivity. User-generated content remains under user control and can be deleted via a data subject request (DSAR). Operational telemetry auto-rotates within 30-90 days, while identity data persists for the account lifetime plus a 30-day grace period after deletion. Meeting data defaults to 90 days but is configurable per organization; a sketch of the schedule follows the key points below.
Key Points:
User content (messages, notes, contacts) - user-controlled, deletable on request
Identity data - kept for account lifetime + 30 days post-deletion
Logs/metrics - automatically purged after 30-90 days
Meeting transcripts - default one year, org-configurable if longer is needed
Org configuration - cleared immediately when organization is deleted
DSAR support - users can request data export or deletion
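For illustration only, the tiered schedule above could be expressed as a retention map with a per-tier reference event. The following Python sketch uses hypothetical names and helpers, not the platform's actual configuration keys.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention map for the tiers above; keys and helper names are
# illustrative only. The reference event differs per tier: record creation for
# telemetry, account deletion for identity data. Meeting data would use an
# org-level setting rather than a fixed value here.
RETENTION = {
    "telemetry": timedelta(days=90),              # logs/metrics rotate within 30-90 days
    "identity_post_deletion": timedelta(days=30), # account lifetime + 30-day grace period
}

def eligible_for_purge(tier: str, reference_time: datetime) -> bool:
    """True once the retention window for this tier has elapsed."""
    return datetime.now(timezone.utc) - reference_time > RETENTION[tier]

# Example: telemetry written 100 days ago is past its window.
print(eligible_for_purge("telemetry", datetime.now(timezone.utc) - timedelta(days=100)))
```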
Data archiving and removal policy
- Deletions propagate to backups/DR via scheduled rotation, so data is fully removed within a 7-30 day window after account closure or a verified erasure request (a deletion-propagation sketch follows this list).
- Verified GDPR data subject requests (access/export, rectification, erasure, restriction) are supported and fulfilled within regulatory timelines.
- Subprocessors are contractually required to honor equivalent deletion obligations.
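To make the backup/DR behavior above concrete, here is a minimal sketch of a tombstone-based erasure flow. The `db` object and its methods are hypothetical placeholders, not the actual implementation.

```python
from datetime import datetime, timezone

# Hypothetical erasure workflow: live records are deleted immediately and a
# tombstone is kept so that any backup restored during the 7-30 day rotation
# window has the deletion re-applied before it serves traffic.

def fulfil_erasure_request(db, user_id: str) -> None:
    db.delete_user_records(user_id)                            # remove from live stores (RDS, S3, Redis)
    db.insert_tombstone(user_id, datetime.now(timezone.utc))   # retained until backups rotate out

def apply_tombstones_after_restore(db) -> None:
    """Run whenever a backup is restored, before it serves traffic."""
    for tombstone in db.list_tombstones():
        db.delete_user_records(tombstone.user_id)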
Data storage policy
All platform data resides in AWS infrastructure (us-west-2/us-east-2 regions) across three primary storage tiers: PostgreSQL/vectorDB, S3 for files and documents, and Redis for caching and message queues. All storage is encrypted at rest using AES-256, and sensitive fields (like reflection content) have additional application-level encryption. Data is segregated by organization, with role-based access controls enforced throughout.
Key Points:
Primary database - AWS RDS PostgreSQL + vectorDB
Object storage - AWS S3 for files, recordings, and documents
Cache/queues - AWS ElastiCache (Redis) for sessions and message streams
Encryption at rest - AES-256 across all storage layers (RDS, S3, ElastiCache)
Field-level encryption - sensitive content (reflections, etc.) encrypted at the application layer (see the sketch after this list)
Data segregation - organization-based isolation via org_user_mapping
Region - all data stored in US (us-west-2 / us-east-2)
Access control - least privilege, scoped API keys, OAuth2-based service-to-service auth
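As an illustration of the field-level encryption point above, the sketch below uses AES-256-GCM from the Python `cryptography` package. The key handling (in practice a KMS-managed key) and the idea of binding the organization ID as associated data are assumptions for the example, not the platform's exact scheme.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Minimal sketch of application-level encryption for sensitive fields, layered
# on top of storage-level AES-256. Column layout and key management are assumed.

def encrypt_field(key: bytes, plaintext: str, org_id: str) -> bytes:
    """Encrypt a sensitive field; the org ID is bound as associated data."""
    nonce = os.urandom(12)                                     # unique nonce per value
    ct = AESGCM(key).encrypt(nonce, plaintext.encode(), org_id.encode())
    return nonce + ct                                          # store nonce alongside ciphertext

def decrypt_field(key: bytes, blob: bytes, org_id: str) -> str:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, org_id.encode()).decode()

key = AESGCM.generate_key(bit_length=256)                      # in practice, a KMS-managed key
token = encrypt_field(key, "reflection text", org_id="org_123")
assert decrypt_field(key, token, org_id="org_123") == "reflection text"
```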
Data center location(s)
United States
Data hosting details
Cloud hosted
App/service has sub-processors
yes
Guidelines for sub-processors
App/service uses large language models (LLM)
yes
LLM model(s) used
OpenAI GPT (ChatGPT), Anthropic Claude, Google Gemini
LLM retention settings
We use LLMs in the backend only; the models are not exposed directly to customers, and no customer data is stored with the LLM providers.
LLM data tenancy policy
We use LLMs in the backend only; the models are not exposed directly to customers, and no customer data is stored with the LLM providers.
LLM data residency policy
We use LLMs in the backend only; the models are not exposed directly to customers, and no customer data is stored with the LLM providers.
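For context, the backend-only pattern described in the answers above typically looks like the sketch below. `llm_client` and `db` are hypothetical placeholders; the point is that conversation state is persisted only in the platform's own stores, never with the model provider.

```python
# Minimal sketch of backend-only LLM usage: customers never call the model
# providers directly, and history lives only in the platform's own database,
# where the retention tiers described earlier apply.

def answer_question(llm_client, db, org_id: str, user_id: str, question: str) -> str:
    history = db.load_conversation(org_id, user_id)            # state stays in our stores
    reply = llm_client.complete(                               # stateless request to the provider
        messages=history + [{"role": "user", "content": question}]
    )
    db.save_turn(org_id, user_id, question, reply)             # persisted under our retention tiers
    return reply
```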