A memory is what you put into Deyta Platform when you call remember. It’s a piece of text plus optional metadata.
await deyta.memory.remember({
  namespace_id: "ns_…",
  content: "Marie Curie won the Nobel Prize in Physics in 1903.",
  title: "Curie biography snippet",
  source: "wikipedia.org",
  metadata: { topic: "physics", year: 1903 },
});

The fields

| Field | Required | Purpose |
| --- | --- | --- |
| `content` | Yes | The text to store. The only field that affects retrieval. |
| `title` | No | Short label that surfaces in console UIs and audit logs. |
| `source` | No | Origin of the content (URL, document name, system identifier). |
| `metadata` | No | Free-form key-value pairs. Returned with retrieved chunks. |
| `ontology_id` | No | Pins extraction to a specific ontology. The namespace's default ontology is used otherwise. |

What you get back

remember returns a RememberResult:
{
  document_id: "doc_…",
  chunks_created: 4,
  entities_extracted: 7,
  relationships_created: 5,
}
The document_id is your handle for this memory. Use it to call forget later.
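The result shape above can be written down as a TypeScript interface. This is a sketch derived from the example response, not necessarily the type the SDK exports, and the `document_id` value is illustrative:

```typescript
// Sketch of the result shape shown above; treat the field names as the
// contract and the interface itself as illustrative.
interface RememberResult {
  document_id: string;
  chunks_created: number;
  entities_extracted: number;
  relationships_created: number;
}

// Example value matching the response shown above (id is made up).
const result: RememberResult = {
  document_id: "doc_123",
  chunks_created: 4,
  entities_extracted: 7,
  relationships_created: 5,
};
```

Persist `document_id` somewhere durable if you ever intend to forget or re-ingest this memory.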

What lives inside a memory

After ingestion, one remember call produces several pieces of state in the namespace:

Document

The top-level record. Holds the original content, title, source, and metadata exactly as you sent them.

Chunks

Passages of the document, each embedded as a vector. Chunks are what recall returns as raw context.

Entities + relationships

Nodes and edges extracted from the chunks. The graph is namespace-local; entities mentioned across multiple memories are merged.
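The merge-and-sever behavior can be illustrated with a toy model. This is not the platform's actual data model; it merges entities by exact name (real extraction resolves aliases) and exists only to show how an entity mentioned by several documents survives a forget:

```typescript
// Toy model of a namespace-local entity graph: mentions of the same entity
// from different documents collapse into one node that remembers which
// documents mentioned it. Illustrative only, not the platform's internals.
type EntityNode = { name: string; sourceDocuments: Set<string> };

class EntityGraph {
  private nodes = new Map<string, EntityNode>();

  // Record that `documentId` mentions an entity called `name`.
  addMention(name: string, documentId: string): void {
    const node = this.nodes.get(name) ?? { name, sourceDocuments: new Set<string>() };
    node.sourceDocuments.add(documentId);
    this.nodes.set(name, node);
  }

  // Forgetting a document severs only its links: entities with no remaining
  // mentions disappear, entities mentioned elsewhere survive.
  forgetDocument(documentId: string): void {
    for (const [name, node] of this.nodes) {
      node.sourceDocuments.delete(documentId);
      if (node.sourceDocuments.size === 0) this.nodes.delete(name);
    }
  }

  has(name: string): boolean {
    return this.nodes.has(name);
  }
}
```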

Forgetting

Calling forget with a document_id removes the document and everything derived from it: chunks, entity mentions sourced from this document, and relationships sourced from this document. Entities that were also mentioned by other memories survive — only the link to this document is severed.
await deyta.memory.forget({
  namespace_id: "ns_…",
  document_id: "doc_…",
});
forget is destructive and irreversible. There is no undo and no soft-delete. Use it deliberately.

Idempotency and re-ingestion

remember does not deduplicate by content — calling it twice with the same string produces two documents. If you need idempotency against your own source-of-truth ID, design your application to track which content has already been ingested, or use the forget + remember pattern for updates. Both the TypeScript SDK and the gateway API treat this as a deliberate "the second call wins" contract.
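The forget + remember pattern can be wrapped in a small upsert layer. This is a sketch, not an SDK feature: `rememberFn` and `forgetFn` stand in for `deyta.memory.remember` / `deyta.memory.forget`, and the in-memory Map should be a durable store in a real application:

```typescript
// Sketch of an idempotent upsert on top of remember/forget. The platform
// does not deduplicate by content, so the application keeps its own
// sourceId -> document_id mapping. Stubs stand in for the real SDK calls.
type RememberFn = (content: string) => Promise<{ document_id: string }>;
type ForgetFn = (documentId: string) => Promise<void>;

class MemoryUpserter {
  // In production, persist this mapping instead of holding it in memory.
  private ingested = new Map<string, string>();

  constructor(private rememberFn: RememberFn, private forgetFn: ForgetFn) {}

  // Re-ingesting under the same sourceId forgets the old document first,
  // so "the second call wins" instead of creating a duplicate.
  async upsert(sourceId: string, content: string): Promise<string> {
    const existing = this.ingested.get(sourceId);
    if (existing !== undefined) await this.forgetFn(existing);
    const result = await this.rememberFn(content);
    this.ingested.set(sourceId, result.document_id);
    return result.document_id;
  }
}
```

Note that forget-then-remember is not atomic: a crash between the two calls loses the old document without creating the new one, which is usually acceptable for memory refresh but worth knowing.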

Ontology

An ontology is a list of entity types and relationship types you want extraction to look for. The defaults are general-purpose (PERSON, ORGANIZATION, CONCEPT, LOCATION). Domain work usually wants more specific types — for example, a medical research ontology might define DRUG, CONDITION, TRIAL, with relationships like TREATS and STUDIED_IN. You manage ontologies in the console under Admin. Pass ontology_id on remember to pin extraction for a specific document; otherwise the namespace’s default applies.
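The medical research example above might be written down like this. The field names here are illustrative assumptions — the actual ontology schema is whatever the console's Admin section manages, and the `ontology_id` in the comment is a placeholder:

```typescript
// Hypothetical shape for a domain ontology, mirroring the medical research
// example in the text. Field names are illustrative, not the real schema.
const medicalOntology = {
  name: "medical-research",
  entity_types: ["DRUG", "CONDITION", "TRIAL"],
  relationship_types: ["TREATS", "STUDIED_IN"],
};

// Once an ontology exists, pin it per document on remember
// (ontology_id value is a placeholder):
//
// await deyta.memory.remember({
//   namespace_id: "ns_…",
//   ontology_id: "ont_…",
//   content: "Drug X showed efficacy for Condition Y in Trial Z.",
// });
```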

Metadata and recall

metadata is not indexed for filtering at the query level. It’s stored alongside chunks and returned in recall results so your application can post-filter or display attribution. If you need filtering, encode the filter target in content itself (e.g., prepend "[topic: physics] …") so it’s searchable.
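Post-filtering on metadata is ordinary client-side work. A minimal sketch, assuming recall returns chunks that carry the metadata stored at remember time (the chunk shape here is an assumption, not the documented response type):

```typescript
// Client-side post-filter over recalled chunks. The RecalledChunk shape is
// an assumption: recall is taken to return each chunk's stored metadata.
type RecalledChunk = { content: string; metadata: Record<string, unknown> };

function postFilter(
  chunks: RecalledChunk[],
  predicate: (metadata: Record<string, unknown>) => boolean,
): RecalledChunk[] {
  return chunks.filter((chunk) => predicate(chunk.metadata));
}
```

Usage, keeping only physics-tagged chunks from a recall result:

```typescript
const physicsOnly = postFilter(results, (m) => m.topic === "physics");
```

Remember that post-filtering happens after retrieval ranking, so over-fetch (ask recall for more chunks than you need) if you expect the filter to discard many of them.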

What’s next

Data flow

Trace a memory from ingestion through retrieval.

Data sources

Connect a data source to populate memories continuously.