product

two layers
doing the work.

two layers. one running underneath everything, structuring what your company knows. the other exposing it to every AI tool your team uses.

architecture

one memory.
every surface.

the retrieval layer reads from the same memory every ingestion source writes into.

retrieval layer: claude · chatgpt · cursor · slack · web app · api
ingestion layer: slack · notion · drive · gmail · hubspot · meetings
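the pattern above can be sketched in a few lines: one shared store, many writers (ingestion sources) and many readers (surfaces). this is an illustrative sketch only; the class and method names are hypothetical, not the product's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    # one memory: every record lives in a single shared store
    records: list = field(default_factory=list)

    def ingest(self, source: str, content: str) -> None:
        # every ingestion source (slack, notion, drive, ...) writes here
        self.records.append({"source": source, "content": content})

    def retrieve(self, query: str) -> list:
        # every surface (claude, chatgpt, cursor, ...) reads the same records
        return [r for r in self.records if query.lower() in r["content"].lower()]

memory = Memory()
memory.ingest("slack", "Q3 pricing decision: keep the enterprise tier at $40k")
memory.ingest("notion", "pricing page draft for the enterprise tier")

# any retrieval surface sees both records, regardless of where they came from
hits = memory.retrieve("enterprise tier")
```

the point of the shape is that retrieval never cares which source wrote a record; surfaces and sources only couple through the shared store.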
capabilities

multimodal capabilities.

multimodal inputs are handled natively: text, images, video, and audio. contracts, decks, scanned documents, and meeting recordings become queryable content.

text

documents, notes, messages

contracts, memos, slack threads, notion pages — all parsed, linked, and cited.

images

screenshots, diagrams, scans

scanned documents, whiteboard photos, and product screenshots become queryable.

video

recordings & demos

recorded calls and product walkthroughs get transcribed, summarised, and indexed.

audio

voice notes & dictation

voice memos and podcast clips flow into the same knowledge layer as text.

pdf

contracts, decks, reports

layout-aware parsing pulls structure from pdfs — tables, clauses, figures.

meetings

transcripts with speakers

who said what, when. decisions and commitments surfaced and linked to the account.
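a multimodal pipeline like the one described above usually starts with a routing step that maps each incoming file to the handler for its modality. this is a hedged sketch of that first step only; the extension lists and pipeline names are illustrative assumptions, not the product's real routing table.

```python
def route(filename: str) -> str:
    """map an incoming file to the modality pipeline that handles it (illustrative)."""
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext in {"md", "txt", "docx"}:
        return "text"    # parsed, linked, and cited
    if ext in {"png", "jpg", "jpeg"}:
        return "image"   # screenshots, diagrams, scans made queryable
    if ext in {"mp4", "mov"}:
        return "video"   # transcribed, summarised, indexed
    if ext in {"mp3", "wav", "m4a"}:
        return "audio"   # voice notes flow into the same layer as text
    if ext == "pdf":
        return "pdf"     # layout-aware parsing: tables, clauses, figures
    return "unknown"
```

downstream of routing, every pipeline writes into the same knowledge layer, which is what makes a scanned contract and a slack thread answerable by the same query.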

open standards & portability

every layer is open.

markdown repo as source of truth. postgres with vector search for retrieval. open standards for the interface. every layer is open. no proprietary lock-in. you can take the knowledge layer with you if you leave.
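the retrieval half of that stack can be illustrated with a toy example: markdown files as the source of truth, ranked by vector similarity. in production the description above implies postgres with a vector index; here the embeddings are hand-made stand-ins and the function names are hypothetical, purely to show the shape of the query.

```python
import math

def cosine(a, b):
    # cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# toy "embeddings" stand in for a real embedding model;
# keys are the markdown files that act as the source of truth
docs = {
    "pricing.md": [0.9, 0.1, 0.0],
    "roadmap.md": [0.1, 0.8, 0.2],
}

def search(query_vec, k=1):
    # rank documents by similarity to the query vector, return the top k
    ranked = sorted(docs, key=lambda name: cosine(query_vec, docs[name]), reverse=True)
    return ranked[:k]

search([0.85, 0.15, 0.0])  # → ["pricing.md"]
```

because the source of truth is plain markdown and the index is standard postgres, the same files and the same embeddings can be rebuilt anywhere, which is what makes the "take it with you" claim mechanical rather than contractual.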

faq

product & tech.

how is this different from a rag setup or an internal wiki?

rag re-derives knowledge on every query. a wiki sits until someone updates it. the knowledge layer compiles once, keeps current automatically, pre-builds cross-references, flags contradictions at ingest, and compounds over time.

what does the underlying architecture look like?

markdown repo as source of truth, postgres with vector search for retrieval, open standards for the interface. every layer is open. no proprietary lock-in. you can take the knowledge layer with you if you leave.

can it read pdfs, images, and meeting videos?

yes. text, images, video, and audio are all handled natively, so contracts, decks, scanned documents, and meeting recordings become queryable content.

see it on your sources.

90 minutes with your leadership team. we map your sources, your AI usage, and the three queries that would move the needle most. you walk out with a scope, and a written plan follows within 48 hours.