`remember` pipeline as direct ingestion, and surface as memories in the chosen namespace.
Three categories of source are supported:

- **Object stores**: S3 (and S3-compatible endpoints). Files in a bucket are ingested as memories.
- **Data warehouses**: Snowflake. Rows from configured queries are ingested.
- **OAuth integrations**: Google Drive and other providers. End users authorize via OAuth; their content syncs into the namespace.
## Two surfaces in the console

The console splits the lifecycle across two pages:

| Page | What it’s for |
|---|---|
| Integrations | Browse available providers and start a new connection (OAuth or manual). |
| Connections | See and manage every live data source connection across your org: pause, resume, test, delete. |
## Object stores

Configure once with bucket credentials. Deyta Platform reads files matching a prefix and ingests their contents.

### From the console

Open Connections in the sidebar, choose Object store, and provide:

| Field | Value |
|---|---|
| Name | Friendly label |
| Namespace | Which namespace the synced content lands in |
| Provider | Currently `s3` (S3 and S3-compatible endpoints) |
| Access key ID / secret access key | IAM credentials with read access to the bucket |
| Bucket | The bucket name |
| Region | AWS region |
| Prefix | Optional. Limits sync to keys under this prefix. |
| Endpoint | Optional. For S3-compatible providers (MinIO, R2, etc.). |
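The same fields can be pictured as an API payload. This is a sketch only: the field names below are derived from the console table above, and no documented request shape is implied.

```typescript
// Sketch: an object-store connection expressed as a payload object.
// Field names mirror the console fields; the shape itself is an assumption.
interface ObjectStoreConnection {
  name: string;
  namespace: string;
  provider: "s3";
  accessKeyId: string;
  secretAccessKey: string;
  bucket: string;
  region: string;
  prefix?: string;   // optional: limit sync to keys under this prefix
  endpoint?: string; // optional: S3-compatible providers (MinIO, R2, ...)
}

// Drop unset optionals so the payload stays minimal.
function buildObjectStorePayload(c: ObjectStoreConnection): ObjectStoreConnection {
  return Object.fromEntries(
    Object.entries(c).filter(([, v]) => v !== undefined)
  ) as ObjectStoreConnection;
}

const payload = buildObjectStorePayload({
  name: "prod-docs",
  namespace: "support",
  provider: "s3",
  accessKeyId: "AKIA...",
  secretAccessKey: "...",
  bucket: "acme-docs",
  region: "us-east-1",
  prefix: "kb/",
});
```

Omitting `endpoint` targets AWS S3 itself; setting it points the same connection at an S3-compatible service.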
### Lifecycle

Object store connections support pause and resume from the console. Pausing stops new sync runs without disconnecting; resuming picks up where it left off. The test action is available both before creation (with inline credentials) and on an existing connection, which is useful for confirming that a credential rotation succeeded.

## Data warehouses (Snowflake)
Configure a Snowflake connection once at the organization level, and Deyta Platform can ingest from the queries configured in each namespace.

### From the console

Open Data warehouses in the sidebar, click Add connection, and provide:

| Field | Value |
|---|---|
| Name | Friendly label |
| Account | Snowflake account identifier (e.g., `xy12345.us-east-1`) |
| Warehouse | Compute warehouse to run queries against |
| Database | Default database |
| Schema | Default schema |
| Username / password | Service account credentials |
Unlike object store connections, Snowflake connections are org-scoped, not namespace-scoped. The same connection can drive ingestion into multiple namespaces, each with its own query configuration.
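The org-scoped model can be sketched as one connection object plus per-namespace query configs. The interfaces below are illustrative assumptions; only the connection field list mirrors the console table above.

```typescript
// Sketch: one org-scoped Snowflake connection driving ingestion into
// two namespaces, each with its own query. Shapes are assumptions.
interface SnowflakeConnection {
  name: string;
  account: string;   // Snowflake account identifier, e.g. "xy12345.us-east-1"
  warehouse: string; // compute warehouse queries run against
  database: string;
  schema: string;
  username: string;
  password: string;
}

interface NamespaceQuery {
  namespace: string;
  query: string; // rows returned by this query are ingested as memories
}

const connection: SnowflakeConnection = {
  name: "analytics-wh",
  account: "xy12345.us-east-1",
  warehouse: "INGEST_WH",
  database: "PROD",
  schema: "PUBLIC",
  username: "deyta_svc",
  password: "<load from a secret manager>",
};

// Org-scoped: the same connection, different per-namespace queries.
const queries: NamespaceQuery[] = [
  { namespace: "support", query: "SELECT id, body FROM tickets" },
  { namespace: "sales", query: "SELECT id, notes FROM accounts" },
];
```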
## OAuth integrations

For consumer-cloud sources (Google Drive, etc.), the user authorizes Deyta Platform against their own account via OAuth. Your application drives the flow with the gateway integrations API. The full flow:

1. **List enabled providers.** Call `GET /integrations/list` to see which providers your org has enabled.
2. **Start a connect session.** Call `POST /integrations/connections/start` with the namespace and provider key. Deyta Platform returns a session token and an OAuth redirect URL.
3. **User authorizes.** Use the `@nangohq/frontend` SDK to walk the user through the provider’s OAuth screen. The provider returns a token to your callback.
4. **Complete the session.** Call `POST /integrations/connections/complete` with the OAuth callback values. The connection is now live and sync starts.

### Listing and removing connections
### Provider availability

Which providers are usable in your org is controlled by the Integrators section under Admin. An admin can toggle providers on or off; disabled providers don’t appear in the integrations UI, and `start` calls for them return 403.
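The server-side half of the flow can be sketched as below. The endpoint paths come from the steps above; the gateway base URL, auth header, and response field names (`providers`, `sessionToken`, `redirectUrl`) are assumptions, and the browser-side `@nangohq/frontend` call is left as a comment because its exact signature depends on the SDK version.

```typescript
// Sketch of the server-side OAuth flow, including the 403 case for
// providers an admin has disabled. Base URL and field names are assumed.
const BASE = "https://gateway.example.com";

// Small pure helper: is this provider in the org's enabled list?
function isProviderEnabled(list: { providers?: string[] }, provider: string): boolean {
  return list.providers?.includes(provider) ?? false;
}

async function startConnection(apiKey: string, namespace: string, provider: string) {
  // 1. List enabled providers.
  const list = await fetch(`${BASE}/integrations/list`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  }).then((r) => r.json());
  if (!isProviderEnabled(list, provider)) {
    throw new Error(`provider ${provider} is not enabled for this org`);
  }

  // 2. Start a connect session.
  const res = await fetch(`${BASE}/integrations/connections/start`, {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({ namespace, provider }),
  });
  if (res.status === 403) {
    // Disabled providers return 403 even if the request is otherwise valid.
    throw new Error("provider disabled by an admin");
  }
  const { sessionToken, redirectUrl } = await res.json();

  // 3. In the browser: hand sessionToken/redirectUrl to @nangohq/frontend
  //    so the user can complete the provider's OAuth screen.
  // 4. Back on the server: POST /integrations/connections/complete with
  //    the OAuth callback values to make the connection live.
  return { sessionToken, redirectUrl };
}
```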
## What’s next

- **Querying memories**: use `recall` and `ask` against the memories you’ve ingested.
- **Managing memories**: inspect, audit, and forget memories.