TypeScript SDK Guide
Complete guide to using the Hindsight TypeScript/JavaScript SDK for memory operations.
Installation
- npm: `npm install @vectorize-io/hindsight-client`
- Yarn: `yarn add @vectorize-io/hindsight-client`
- pnpm: `pnpm add @vectorize-io/hindsight-client`
Quick Start
import { HindsightClient } from '@vectorize-io/hindsight-client';
// Initialize the client
const client = new HindsightClient({
baseUrl: 'https://api.hindsight.vectorize.io',
apiKey: 'your-api-key'
});
async function main() {
// Create a memory bank
const bank = await client.createBank('my-assistant', {
name: 'My Assistant'
});
// Store a memory
await client.retain(
'my-assistant',
'The user prefers concise responses and dark mode.'
);
// Retrieve memories
const recallResult = await client.recall(
'my-assistant',
"What are the user's preferences?"
);
recallResult.results.forEach(memory => {
console.log(memory.text);
});
// Get an AI-powered answer
const response = await client.reflect(
'my-assistant',
'How should I format my responses for this user?'
);
console.log(response.text);
}
main();
Client Configuration
Basic Configuration
import { HindsightClient } from '@vectorize-io/hindsight-client';
const client = new HindsightClient({
baseUrl: 'https://api.hindsight.vectorize.io',
apiKey: 'your-api-key'
});
Environment Variables
const client = new HindsightClient({
baseUrl: process.env.HINDSIGHT_BASE_URL!,
apiKey: process.env.HINDSIGHT_API_KEY!
});
Type Definitions
The SDK is fully typed. Here are the key interfaces:
interface RecallResponse {
results: RecallResult[];
entities?: Entity[];
trace?: TraceInfo;
chunks?: Chunk[];
}
interface RecallResult {
text: string;
type: 'world' | 'experience' | 'observation';
// Additional metadata fields
}
interface ReflectResponse {
text: string;
based_on: BasedOnItem[];
structured_output?: Record<string, unknown>;
usage?: {
input_tokens: number;
output_tokens: number;
total_tokens: number;
};
}
interface BankProfileResponse {
bank_id: string;
name?: string;
background?: string;
disposition?: {
skepticism: number;
literalism: number;
empathy: number;
};
}
Memory Banks
Create a Bank
const bank = await client.createBank('customer-support-agent', {
name: 'Customer Support Agent',
background: 'This agent handles customer inquiries for an e-commerce platform',
disposition: {
skepticism: 3,
literalism: 2,
empathy: 4
}
});
console.log(`Created bank: ${bank.bank_id}`);
Get Bank Profile
const profile = await client.getBankProfile('my-assistant');
console.log(`Bank name: ${profile.name}`);
List Memories in a Bank
const result = await client.listMemories('my-assistant', {
limit: 100,
offset: 0
});
console.log(`Total memories: ${result.total}`);
result.items.forEach(memory => {
console.log(`- ${JSON.stringify(memory)}`);
});
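When a bank holds more memories than a single page returns, the `limit`/`offset` options above can drive a pagination loop. The following sketch is ours, not part of the SDK; it assumes only the `{ total, items }` response shape shown above, and takes the client through a minimal structural interface so nothing else about `HindsightClient` is assumed.

```typescript
// Minimal shapes matching the listMemories response shown above.
interface MemoryPage {
  total: number;
  items: unknown[];
}

interface MemoryLister {
  listMemories(bankId: string, opts: { limit: number; offset: number }): Promise<MemoryPage>;
}

// Hypothetical helper (not an SDK method): page through listMemories
// until every item has been fetched.
async function listAllMemories(
  client: MemoryLister,
  bankId: string,
  pageSize = 100
): Promise<unknown[]> {
  const all: unknown[] = [];
  let offset = 0;
  for (;;) {
    const page = await client.listMemories(bankId, { limit: pageSize, offset });
    all.push(...page.items);
    offset += page.items.length;
    // Stop once everything is collected, or if the server returns an empty page.
    if (all.length >= page.total || page.items.length === 0) break;
  }
  return all;
}
```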
Retain (Store Memories)
Basic Usage
await client.retain(
'my-assistant',
'User mentioned they work remotely and prefer async communication.'
);
With Options
await client.retain(
'my-assistant',
'Customer reported a bug with the checkout process.',
{
context: 'Support ticket conversation',
timestamp: new Date(),
metadata: {
ticketId: 'TKT-12345',
priority: 'high'
}
}
);
Batch Operations
const items = [
{ content: 'User is based in Pacific timezone' },
{ content: 'User prefers email over phone calls' },
{ content: 'User has been a customer for 3 years' }
];
await client.retainBatch('my-assistant', items);
Async Batch Operations
// For large batches, use async processing
await client.retainBatch('my-assistant', items, {
async: true
});
Recall (Search Memories)
Basic Search
const result = await client.recall(
'my-assistant',
'What communication preferences does the user have?'
);
result.results.forEach(memory => {
console.log(`[${memory.type}] ${memory.text}`);
});
With Options
// Limit results and set search budget
const result = await client.recall(
'my-assistant',
'project deadlines',
{
maxTokens: 4096,
budget: 'mid' // 'low', 'mid', or 'high'
}
);
// Filter by memory type
const observations = await client.recall(
'my-assistant',
'user preferences',
{
types: ['observation']
}
);
Include Entity Information
const result = await client.recall(
'my-assistant',
'Tell me about Alice',
{
includeEntities: true,
maxEntityTokens: 1000
}
);
// Access entities
if (result.entities) {
result.entities.forEach(entity => {
console.log(`Entity: ${JSON.stringify(entity)}`);
});
}
Processing Results
const result = await client.recall('my-assistant', 'user info');
if (result.results.length === 0) {
console.log('No relevant memories found');
} else {
result.results.forEach(memory => {
console.log(`Type: ${memory.type}`);
console.log(`Content: ${memory.text}`);
console.log('---');
});
}
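When rendering recall output, it often helps to bucket results by memory type before displaying them. The `groupByType` helper below is our own sketch; it relies only on the `RecallResult` fields documented in the type definitions above.

```typescript
// Subset of the RecallResult shape documented above.
interface RecallResultLike {
  text: string;
  type: 'world' | 'experience' | 'observation';
}

// Group result texts by their memory type, preserving result order within each group.
function groupByType(results: RecallResultLike[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const r of results) {
    const bucket = groups.get(r.type) ?? [];
    bucket.push(r.text);
    groups.set(r.type, bucket);
  }
  return groups;
}
```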
Reflect (Reasoning Over Memories)
Basic Query
const response = await client.reflect(
'my-assistant',
'What should I know about this customer before our call?'
);
console.log(response.text);
With Options
const response = await client.reflect(
'my-assistant',
'What are their main pain points?',
{
context: "We're preparing for a product review meeting",
budget: 'high' // 'low', 'mid', or 'high'
}
);
console.log(response.text);
// Access source memories
response.based_on.forEach(source => {
console.log(`Based on: ${JSON.stringify(source)}`);
});
Token Usage
const response = await client.reflect(
'my-assistant',
'Summarize our relationship'
);
if (response.usage) {
console.log(`Total tokens: ${response.usage.total_tokens}`);
}
Mental Models
Mental models are user-curated, pre-computed reflections that stay current as new memories are added. Each one is generated by running a reflect query, and can be refreshed automatically after observation consolidation.
Create a Mental Model
// Creation runs asynchronously via reflect
const result = await client.createMentalModel(
'my-assistant',
'User Profile',
'What do we know about this user?'
);
console.log(`Operation ID: ${result.operation_id}`);
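Since creation returns an operation ID rather than the finished model, you may want to wait until the work completes before reading the content. The SDK's operation-status API (if any) isn't shown here, so the `waitFor` helper below takes the completion check as a caller-supplied callback; the helper and its names are ours, not part of the SDK.

```typescript
// Generic polling helper: repeatedly invoke a caller-supplied completion check
// until it reports done, sleeping between attempts and capping total attempts.
// No specific SDK method is assumed; the caller decides what "done" means.
async function waitFor(
  isDone: () => Promise<boolean>,
  { intervalMs = 1000, maxAttempts = 30 } = {}
): Promise<void> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    if (await isDone()) return;
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for operation to complete');
}
```

Pass a callback that checks whatever completion signal your deployment exposes, such as the model's content becoming available.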
Create with Options
const result = await client.createMentalModel(
'my-assistant',
'Team Directory',
'Who works here and what do they do?',
{
tags: ['team', 'directory'],
maxTokens: 4096,
trigger: { refreshAfterConsolidation: true }
}
);
List Mental Models
const models = await client.listMentalModels('my-assistant');
models.items.forEach(model => {
console.log(`${model.name}: ${model.content?.substring(0, 100)}...`);
});
// Filter by tags
const filtered = await client.listMentalModels('my-assistant', {
tags: ['team']
});
Get a Mental Model
const model = await client.getMentalModel('my-assistant', 'mm_abc123');
console.log(`Name: ${model.name}`);
console.log(`Content: ${model.content}`);
console.log(`Last refreshed: ${model.last_refreshed_at}`);
Refresh a Mental Model
// Re-run the source query to update the content
const result = await client.refreshMentalModel('my-assistant', 'mm_abc123');
console.log(`Refresh operation: ${result.operation_id}`);
Update a Mental Model
const model = await client.updateMentalModel('my-assistant', 'mm_abc123', {
name: 'Updated Profile',
sourceQuery: 'What are the user\'s key preferences?',
trigger: { refreshAfterConsolidation: true }
});
Delete a Mental Model
await client.deleteMentalModel('my-assistant', 'mm_abc123');
Error Handling
import { HindsightClient } from '@vectorize-io/hindsight-client';
const client = new HindsightClient({
baseUrl: 'https://api.hindsight.vectorize.io',
apiKey: 'your-api-key'
});
try {
const result = await client.recall('invalid-bank', 'test');
} catch (error) {
console.error('Error:', error instanceof Error ? error.message : error);
}
Common HTTP Errors
| Status | Cause | Solution |
|---|---|---|
| 401 | Invalid API key | Check your API key |
| 402 | Insufficient credits | Add credits to your account |
| 404 | Invalid bank_id | Verify the bank exists |
| 400 | Invalid request | Check request parameters |
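The table above can be encoded as a small lookup when logging failures. This helper is a sketch of ours: how to extract the status code from a thrown error depends on the SDK's error type, which we don't assume here, so the function simply takes the numeric code.

```typescript
// Map the HTTP status codes from the table above to actionable messages.
// Extracting the status from a thrown error depends on the SDK's error type,
// so this helper takes the numeric code directly.
function describeHttpError(status: number): string {
  switch (status) {
    case 400: return 'Invalid request: check request parameters';
    case 401: return 'Invalid API key: check your API key';
    case 402: return 'Insufficient credits: add credits to your account';
    case 404: return 'Invalid bank_id: verify the bank exists';
    default:  return `Unexpected HTTP error (status ${status})`;
  }
}
```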
Framework Integration
Next.js
// app/api/chat/route.ts
import { NextRequest, NextResponse } from 'next/server';
import { HindsightClient } from '@vectorize-io/hindsight-client';
const client = new HindsightClient({
baseUrl: process.env.HINDSIGHT_BASE_URL!,
apiKey: process.env.HINDSIGHT_API_KEY!
});
export async function POST(request: NextRequest) {
const { bankId, message } = await request.json();
// Store the user message
await client.retain(bankId, `User said: ${message}`);
// Get relevant context
const memories = await client.recall(bankId, message, { limit: 5 });
// Generate a response
const response = await client.reflect(
bankId,
`Based on our conversation history, respond to: ${message}`
);
return NextResponse.json({ response: response.text });
}
Express
import express from 'express';
import { HindsightClient } from '@vectorize-io/hindsight-client';
const app = express();
const client = new HindsightClient({
baseUrl: process.env.HINDSIGHT_BASE_URL!,
apiKey: process.env.HINDSIGHT_API_KEY!
});
app.use(express.json());
app.post('/api/memory', async (req, res) => {
const { bankId, content } = req.body;
try {
await client.retain(bankId, content);
res.json({ success: true });
} catch (error) {
res.status(500).json({ error: 'Failed to store memory' });
}
});
app.get('/api/search', async (req, res) => {
const { bankId, query } = req.query as { bankId: string; query: string };
try {
const result = await client.recall(bankId, query);
res.json({ memories: result.results });
} catch (error) {
res.status(500).json({ error: 'Search failed' });
}
});
app.listen(3000);
Best Practices
Singleton Pattern
Create a single client instance:
// lib/hindsight.ts
import { HindsightClient } from '@vectorize-io/hindsight-client';
let client: HindsightClient | null = null;
export function getHindsightClient(): HindsightClient {
if (!client) {
client = new HindsightClient({
baseUrl: process.env.HINDSIGHT_BASE_URL!,
apiKey: process.env.HINDSIGHT_API_KEY!
});
}
return client;
}
Type-Safe Metadata
interface TicketMetadata {
ticketId: string;
priority: 'low' | 'medium' | 'high';
customerId: string;
}
await client.retain(
'my-assistant',
'Customer issue description',
{
metadata: {
ticketId: 'TKT-123',
priority: 'high',
customerId: 'cust_abc'
} satisfies TicketMetadata
}
);
Graceful Degradation
async function getContext(bankId: string, query: string): Promise<string> {
try {
const result = await client.recall(bankId, query, { limit: 3 });
return result.results.map(m => m.text).join('\n');
} catch (error) {
console.error('Failed to get context:', error);
return ''; // Return empty context on failure
}
}
Complete Example
import { HindsightClient } from '@vectorize-io/hindsight-client';
async function main() {
const client = new HindsightClient({
baseUrl: process.env.HINDSIGHT_BASE_URL!,
apiKey: process.env.HINDSIGHT_API_KEY!
});
try {
const bankId = 'demo-assistant';
// Create a memory bank
const bank = await client.createBank(bankId, {
name: 'Demo Assistant',
disposition: {
skepticism: 3,
literalism: 2,
empathy: 4
}
});
console.log(`Using bank: ${bank.bank_id}`);
// Store some memories
const memoriesToStore = [
"User's name is Alice and she is a product manager",
'Alice prefers bullet points over long paragraphs',
'Alice works at TechCorp and manages the mobile team',
"Alice mentioned she's interested in AI automation"
];
for (const content of memoriesToStore) {
await client.retain(bankId, content);
console.log(`Stored: ${content.substring(0, 50)}...`);
}
// Search for relevant memories
const result = await client.recall(
bankId,
'What does Alice do for work?'
);
console.log('\nRelevant memories:');
result.results.forEach(memory => {
console.log(` - ${memory.text}`);
});
// Get an AI-synthesized answer
const response = await client.reflect(
bankId,
'Write a brief introduction for Alice'
);
console.log(`\nAI Response:\n${response.text}`);
} catch (error) {
console.error('Error:', error instanceof Error ? error.message : error);
}
}
main();