Features

The platform behind the memory API.

Neuralbase replaces the brittle parts of memory infrastructure: ingestion, document processing, isolation, quota control, and runtime retrieval, all behind one backend surface.

Why Neuralbase

Replace memory glue code with one reliable platform.

Neuralbase removes the fragile parts of memory infrastructure so your team can focus on product behavior and user outcomes.

neuralbase — store.ts
import { Neuralbase } from "@neuralbase/sdk";

const nb = new Neuralbase({
  apiKey: process.env.NB_KEY,
});

await nb.store({
  userId: "user_8f2",
  content: "Prefers dark mode, concise replies",
  metadata: { source: "chat", session: "s_892" },
});

// ✓ { ok: true, id: "mem_3f2a91", ms: 12 }

End-to-end memory pipeline

Write memory once and let Neuralbase handle embedding, indexing, and retrieval orchestration without extra plumbing.

neuralbase — recall.ts
// Semantic recall with relevance threshold
const memories = await nb.recall({
  userId: "user_8f2",
  query: "What are their UX preferences?",
  topK: 5,
  minScore: 0.72,
  filter: { source: "chat" },
});

// memories →
// { score: 0.94, content: "Prefers dark mode..." }
// { score: 0.87, content: "React & TypeScript dev" }

Relevance you can tune

Blend semantic similarity with metadata filters so responses stay grounded and personalized to each user.
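The blending described above can be sketched in standalone TypeScript. This is not the SDK's internals, just an illustration of the idea: restrict candidates by metadata first, score the remainder by cosine similarity, then apply a minimum-score cutoff and top-K limit (mirroring the `topK`, `minScore`, and `filter` parameters shown earlier). The `Memory` shape and `rank` helper are hypothetical.

```typescript
// Hypothetical sketch of filter-then-rank retrieval (not SDK internals).
interface Memory {
  content: string;
  embedding: number[];
  metadata: Record<string, string>;
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function rank(
  memories: Memory[],
  queryEmbedding: number[],
  opts: { topK: number; minScore: number; filter?: Record<string, string> },
): { score: number; content: string }[] {
  return memories
    // 1. Metadata filter runs before any ranking.
    .filter((m) =>
      !opts.filter ||
      Object.entries(opts.filter).every(([k, v]) => m.metadata[k] === v),
    )
    // 2. Score survivors by semantic similarity.
    .map((m) => ({ score: cosine(m.embedding, queryEmbedding), content: m.content }))
    // 3. Enforce the relevance threshold, then keep the top K.
    .filter((r) => r.score >= opts.minScore)
    .sort((a, b) => b.score - a.score)
    .slice(0, opts.topK);
}
```

Filtering before ranking keeps irrelevant tenants' or sources' content out of the candidate set entirely, rather than hoping a score threshold catches it.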

neuralbase — namespace.ts
// Every tenant is fully isolated by default
const acme = new Neuralbase({
  apiKey: process.env.NB_KEY,
  namespace: "tenant_acme", // scoped key
});

const result = await acme.recall({
  userId: "user_8f2",
  query: "preferences",
});

// Cross-namespace access → 403 Forbidden

Isolation by design

Per-user vector isolation, scoped keys, and workspace controls so production traffic stays segmented.
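The isolation guarantee can be illustrated with a minimal standalone sketch, assuming nothing about Neuralbase's actual implementation: memories live in a map keyed by namespace, and every read is checked against the caller's scoped namespace before any data is touched, mirroring the 403 behavior shown above. `NamespacedStore` is a hypothetical name.

```typescript
// Hypothetical sketch of namespace-scoped access (not SDK internals).
class NamespacedStore {
  // namespace -> userId -> memories
  private data = new Map<string, Map<string, string[]>>();

  store(ns: string, userId: string, content: string): void {
    if (!this.data.has(ns)) this.data.set(ns, new Map());
    const users = this.data.get(ns)!;
    users.set(userId, [...(users.get(userId) ?? []), content]);
  }

  recall(callerNs: string, targetNs: string, userId: string): string[] {
    // The scope check runs before any data access, so a mis-scoped
    // key can never observe another tenant's memories.
    if (callerNs !== targetNs) {
      throw new Error("403 Forbidden: cross-namespace access");
    }
    return this.data.get(targetNs)?.get(userId) ?? [];
  }
}
```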

neuralbase — events.ts
// Subscribe to real-time memory events
const metrics: Record<string, unknown>[] = [];

nb.events.on("recall", (event) => {
  metrics.push({
    latency: event.latencyMs,   // 1.8ms
    score: event.topScore,      // 0.94
    namespace: event.namespace,
    userId: event.userId,
  });
});

// Live dashboard → /dashboard/metrics
export default metrics;

Visibility built in

Track reads, writes, latency, backups, and key activity from one dashboard as usage scales.

Why teams switch

What the platform removes from your backlog.

The value is not just semantic search. It is everything you no longer have to wire and maintain yourself to keep memory reliable in production.

Before and after
Memory writes
Without Neuralbase: Custom ingestion logic, ad-hoc chunking, embedding orchestration, and storage fan-out.
With Neuralbase: One API surface plus a managed pipeline for chunking, embedding, indexing, and retrieval.

Runtime retrieval
Without Neuralbase: Manual semantic search plus metadata filters scattered across your app and vector layer.
With Neuralbase: Search routes that return memory with context while the vector infrastructure stays private.

Document intelligence
Without Neuralbase: A separate file pipeline, OCR parsing, extraction logic, and a second storage path.
With Neuralbase: Document parsing, optional AI extraction, and optional storage as memory through the same platform.

Operational control
Without Neuralbase: Separate usage tracking, ad-hoc quotas, and unclear backup behavior.
With Neuralbase: Plan-aware quotas, rate limits, backup visibility, and dashboard activity in one place.
Architecture

Keep control of your memory infrastructure.

Neuralbase works the way serious teams prefer: a public API layer in front, a private vector store behind it, and nothing exposed that should not be.

neuralbase — neuralbase.config.ts
import { defineConfig } from "@neuralbase/sdk";

export default defineConfig({
  api: { // public layer
    endpoint: "https://api.neuralbase.cloud",
    rateLimit: 1000, // req/min
  },
  vectorStore: { // private layer
    region: "us-east-1",
    namespace: process.env.TENANT_NS,
  },
  embeddings: { // managed layer
    model: "text-embedding-3-small",
    dimensions: 1536,
  },
});

Public API layer

Expose only your API domain to clients. Internal services stay protected with no direct vector access from the outside.

Private vector runtime

Run your vector layer on the same infrastructure as your backend for lower latency and tighter data control.

Managed embedding layer

Use Neuralbase's managed embedding layer now and keep room to tune retrieval as traffic scales.

Integrations

Fits into the backend you already have.

You do not need a platform migration to add memory. Neuralbase drops into your current architecture and starts working immediately.

Web applications

Call memory retrieval from your app backend or server actions.

Backend services

Plug into existing APIs without restructuring your core service.

Worker pipelines

Power async tasks, agent workers, and model orchestration layers.

Event workflows

Persist memory from webhooks, queues, and scheduled jobs.

Analytics tooling

Track retrieval quality and memory impact over time.

Cloud runtimes

Run from serverless, containers, or your private VM environment.

Filter memory precisely

Use project, user, and metadata filters before ranking so results stay relevant and safe.
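As a standalone sketch of that ordering, the helper below narrows candidates by project, user, and optional metadata before any similarity ranking would run. The `Candidate` shape, `preFilter` name, and scope fields are illustrative assumptions, not part of the Neuralbase API.

```typescript
// Hypothetical pre-ranking filter: scope candidates before scoring.
interface Candidate {
  projectId: string;
  userId: string;
  metadata: Record<string, string>;
}

function preFilter<T extends Candidate>(
  candidates: T[],
  scope: { projectId: string; userId: string; metadata?: Record<string, string> },
): T[] {
  return candidates.filter(
    (c) =>
      c.projectId === scope.projectId &&
      c.userId === scope.userId &&
      // Metadata constraints are conjunctive: every key must match.
      (!scope.metadata ||
        Object.entries(scope.metadata).every(([k, v]) => c.metadata[k] === v)),
  );
}
```

Because scoping happens before ranking, a high similarity score can never surface a memory that belongs to another project or user.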

Separate environments cleanly

Keep staging and production keys isolated to avoid accidental cross-environment writes.
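One common pattern for that separation, sketched here under assumed environment-variable names (`NB_KEY_PROD` and `NB_KEY_STAGING` are illustrative, not part of the SDK): resolve the API key from the runtime environment and fail fast when the expected key is missing, so a misconfigured process cannot silently write into the wrong environment.

```typescript
// Hypothetical key resolution per environment (variable names assumed).
function resolveApiKey(env: Record<string, string | undefined>): string {
  const key =
    env.NODE_ENV === "production" ? env.NB_KEY_PROD : env.NB_KEY_STAGING;
  if (!key) {
    // Failing fast beats falling back to the wrong environment's key.
    throw new Error(
      `Missing Neuralbase key for ${env.NODE_ENV ?? "development"}`,
    );
  }
  return key;
}
```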