Announcement

Persistent Character Memory for AI Agents in 10 Lines of Code.

Introducing official Python, TypeScript, and Java SDKs for HippoDid — the fastest way to give your AI agents memory that survives across sessions, frameworks, and deployments.

Yang Jing, Founder — SameThoughts Inc.

Every AI agent framework has a memory problem. LangChain has ConversationBufferMemory. CrewAI has crew-level memory. OpenAI Agents has tool state. But they all share the same limitation: when the session ends, the memory is gone.

Your agent forgets the user's name. The customer's purchase history vanishes. The decisions your team made in yesterday's planning session don't exist today. Every conversation starts from zero.

Today we're releasing official SDKs that fix this in three languages and ten lines of code.


Install in One Command

Python
pip install hippodid
TypeScript / Node.js
npm install @hippodid/sdk
Java / Spring Boot (Gradle)
implementation "dev.hippodid:hippodid-spring-boot-starter:1.2.0"

Zero configuration beyond your API key. The Python SDK uses httpx for async support. The TypeScript SDK uses native fetch with zero runtime dependencies, so it works everywhere: Node.js, Vercel Edge Functions, Cloudflare Workers. The Java starter auto-configures with Spring Boot 3.3+.


The 10-Line Example

Here's what “persistent memory” looks like in practice. This is a complete, working example — not a simplified demo:

Python
from hippodid import HippoDid
from openai import OpenAI
 
hd = HippoDid(api_key="hd_...")
llm = OpenAI()
 
# Assemble a prompt with profile + relevant memories
ctx = hd.assemble_context(
    character_id="customer-uuid",
    query="What does this customer need?",
    strategy="concierge",
)
 
# The formatted_prompt has everything the LLM needs
resp = llm.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "system", "content": ctx.formatted_prompt}],
)

That's it. assemble_context() does three things under the hood:

  1. Fetches the character's profile (system prompt, personality, background)
  2. Searches memories semantically by query, ranked by relevance
  3. Formats everything into a single prompt using your chosen strategy
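
To make the flow concrete, here is a toy, local-only approximation of those three steps. This is illustrative pseudologic, not the SDK's implementation: the real search is semantic (embedding-based), while this sketch just scores keyword overlap.

```python
# Toy sketch of what assemble_context() conceptually does.
# The real SDK fetches the profile over HTTP and ranks memories
# semantically; this uses naive word overlap purely for illustration.

def assemble_context_sketch(profile: dict, memories: list[str], query: str, top_k: int = 3) -> str:
    # 1. "Fetch" the character's profile (here: passed in directly)
    header = f"You are {profile['name']}. {profile['background']}"

    # 2. Rank memories by relevance to the query (toy: shared-word count)
    query_words = set(query.lower().split())
    ranked = sorted(memories, key=lambda m: len(query_words & set(m.lower().split())), reverse=True)

    # 3. Format everything into a single prompt
    memory_block = "\n".join(f"- {m}" for m in ranked[:top_k])
    return f"{header}\n\nRelevant memories:\n{memory_block}"

profile = {"name": "Ava", "background": "A support concierge for Acme."}
memories = [
    "Customer prefers email over phone.",
    "Customer bought the Pro plan in March.",
    "Weather was sunny yesterday.",
]
prompt = assemble_context_sketch(profile, memories, "What plan does this customer have?")
```

The real call returns richer metadata than a single string, but the shape of the output is the same: profile framing first, then the most relevant memories.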

Five Assembly Strategies

Different use cases need different prompt structures. A customer service agent needs different context framing than a coding assistant. We ship five built-in strategies:

  • default: General-purpose, balanced context
  • conversational: Chatbots, personality-forward agents
  • task_focused: Coding assistants, structured workflows
  • concierge: Customer service, account management
  • matching: Compatibility, recommendations
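
The idea behind strategies can be sketched as a lookup of prompt templates. The strategy names below come from the list above, but these templates are invented for illustration; the SDK's actual formatting differs.

```python
# Illustrative only: each strategy frames the same assembled
# context differently. These templates are made up to show the idea.

STRATEGY_TEMPLATES = {
    "default": "Context:\n{context}",
    "conversational": "Stay in character. Here is what you remember:\n{context}",
    "task_focused": "Use only the facts below to complete the task.\nFacts:\n{context}",
    "concierge": "You are assisting a known customer. Account context:\n{context}",
    "matching": "Compare the profiles below and score compatibility.\n{context}",
}

def format_with_strategy(strategy: str, context: str) -> str:
    # Unknown strategies fall back to the balanced default
    template = STRATEGY_TEMPLATES.get(strategy, STRATEGY_TEMPLATES["default"])
    return template.format(context=context)
```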

Every Framework, Same Memory

The SDKs work with any AI framework. Here are quick examples for the most popular ones:

LangChain

from hippodid import HippoDid
from langchain_openai import ChatOpenAI
 
hd = HippoDid(api_key="hd_...")
ctx = hd.assemble_context(char_id, query)
resp = ChatOpenAI().invoke(ctx.formatted_prompt)

Vercel AI SDK (TypeScript)

import { HippoDid } from "@hippodid/sdk";
import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
 
const hd = new HippoDid({ apiKey: "hd_..." });
const ctx = await hd.assembleContext(charId, query);
const result = streamText({
  model: anthropic("claude-sonnet-4-20250514"),
  system: ctx.formattedPrompt,
  messages,
});

Batch Onboarding: 5,000 Characters from a CSV

Real products don't have one character. They have thousands — one per customer, employee, or entity. The SDKs include batch operations that upload a CSV via multipart form and create characters asynchronously:

job = hd.batch_create_characters(
    template_id="support-agent-template",
    data=customers_df,  # pandas DataFrame or list of dicts
    external_id_column="email",
    on_conflict="SKIP",
    dry_run=True,  # preview first!
)
# job.total_rows: 5000, job.status: "ACCEPTED"

Dry-run mode validates your data without creating anything. When you're ready, remove dry_run=True and monitor progress with get_batch_job_status().
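
A typical way to monitor an async job is a bounded polling loop. This is a generic sketch, not SDK code: in practice `poll_fn` would wrap `get_batch_job_status()`, and the terminal status names here are assumptions.

```python
import time

# Poll an async batch job until it reaches a terminal state.
# poll_fn is any zero-argument callable returning the current status
# string (e.g. a closure over the SDK's job-status call).

def wait_for_batch(poll_fn, interval_s: float = 2.0, timeout_s: float = 600.0) -> str:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = poll_fn()
        if status in ("COMPLETED", "FAILED"):  # assumed terminal states
            return status
        time.sleep(interval_s)
    raise TimeoutError("batch job did not finish in time")
```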


Memory Modes: EXTRACTED, VERBATIM, or HYBRID

Not all memory needs AI processing. Sometimes you need exact quotes preserved. Each character can be configured with one of three memory modes:

  • EXTRACTED: AI categorizes and extracts structured memories from raw text. Best for most use cases.
  • VERBATIM: Stores exact text as-is. Good for compliance, legal, or when exact wording matters.
  • HYBRID: Both extracted and verbatim. Highest fidelity, most storage.
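
The difference between the modes can be shown with a toy store. The "extraction" below is a placeholder (first sentence, lowercased); the real EXTRACTED mode uses AI to produce structured memories.

```python
# Toy illustration of the three memory modes: what ends up stored
# for the same raw input under each configuration.

def store_memory(raw_text: str, mode: str) -> dict:
    record = {}
    if mode in ("VERBATIM", "HYBRID"):
        record["verbatim"] = raw_text  # exact wording preserved
    if mode in ("EXTRACTED", "HYBRID"):
        # stand-in for AI extraction: keep a trimmed "fact"
        record["extracted"] = raw_text.split(".")[0].strip().lower()
    return record
```

HYBRID stores both representations, which is why it costs the most storage.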

What's in the Box

All three SDKs cover the full HippoDid API surface:

  • Characters — CRUD, clone, external ID lookup
  • Memories — add (AI-extracted), add direct, search, update, delete
  • Categories — list and manage memory categories
  • Tags — add, replace, remove character tags
  • Templates — character templates and agent config templates
  • Batch — CSV upload, async job tracking, dry run
  • Agent Config — system prompt, model, temperature, tools
  • Context Assembly — 5 strategies, client-side prompt building

Plus automatic retry with exponential backoff for 429/5xx errors, typed error classes, and full TypeScript/Python type hints.
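
Retry with exponential backoff is a standard pattern; here is a generic sketch of the behavior described (illustrative, not the SDKs' actual implementation; the status set, signature, and jitter are assumptions):

```python
import random
import time

RETRYABLE = {429, 500, 502, 503, 504}

def with_retries(call, max_attempts: int = 5, base_delay_s: float = 0.5):
    """Retry `call` on retryable HTTP statuses with exponential backoff
    plus jitter. `call` is a zero-argument callable returning
    (status_code, body)."""
    for attempt in range(max_attempts):
        status, body = call()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            # 0.5s, 1s, 2s, ... plus a little jitter to avoid thundering herds
            time.sleep(base_delay_s * (2 ** attempt) + random.uniform(0, 0.1))
    return status, body
```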


Get Started

Sign up at hippodid.com — the free tier includes 3 characters and semantic search. No credit card required.