Quick Start
Get up and running with the Grounded Intelligence SDK in 10 minutes.
Note: The package is installed as lumina-sdk, but we refer to it as the Grounded Intelligence SDK.
Prerequisites
- Node.js 18+
- Write key (gi_xxxxx) from your Grounded Intelligence dashboard
- (Optional) PostHog account for UI analytics
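Since every event is rejected without a valid write key, it can help to fail fast at startup. The helper below is a sketch, not part of the SDK; the only assumption it encodes is the gi_ prefix shown above:

```typescript
// Fail fast on a malformed write key before calling Lumina.init.
// Grounded Intelligence write keys start with "gi_" (see Prerequisites).
// This helper is illustrative and not exported by lumina-sdk.
function assertWriteKey(key: string | undefined): string {
  if (!key || !key.startsWith('gi_')) {
    throw new Error('Expected a write key starting with "gi_"')
  }
  return key
}
```

Typical usage would be `const writeKey = assertWriteKey(process.env.LUMINA_WRITE_KEY)` so a missing environment variable surfaces at boot rather than as silently dropped events.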
1. Install the SDK
```shell
npm install lumina-sdk
```

Optional peer dependencies (for provider integration):

```shell
# For PostHog UI analytics
npm install posthog-js
```

2. Initialize the SDK
Initialize once at your application's entry point:
```typescript
import { Lumina, CaptureTranscript } from 'lumina-sdk'

Lumina.init({
  endpoint: 'https://your-ingest-server.com',
  writeKey: 'gi_xxxxx', // API key from your Grounded Intelligence dashboard
  captureTranscript: CaptureTranscript.Full,
  maskFn: (text) => text,
  enableToolWrapping: true,
  flushIntervalMs: 5000,
  maxBatchBytes: 100_000,
  uiAnalytics: createDummyProvider(), // See Provider Integration below
})
```

CaptureTranscript Options:
- CaptureTranscript.Full: Capture all messages verbatim
- CaptureTranscript.Masked: Apply maskFn before sending
- CaptureTranscript.None: Don't capture transcript content
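With CaptureTranscript.Masked, maskFn receives each message's text and returns the redacted version that gets sent. A minimal sketch that strips email addresses (the regex is a deliberately simplified assumption, not an SDK utility):

```typescript
// Redact email addresses before transcript content leaves the app.
// Pass this as maskFn alongside CaptureTranscript.Masked.
// The regex is intentionally simple and illustrative.
const maskEmails = (text: string): string =>
  text.replace(/[\w.+-]+@[\w-]+\.[\w.-]+/g, '[EMAIL]')
```

Swapping the passthrough `maskFn: (text) => text` above for `maskFn: maskEmails` would redact emails before anything reaches the ingest server.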
3. Start a Session and Capture a Turn
```typescript
// Start a session (once per conversation)
const session = await Lumina.session.start()

// Create a turn (once per user message → assistant response)
const turn = session.turn()

// Wrap your LLM call
const response = await turn.wrapLLM(
  async () => {
    // Your OpenAI/Anthropic/etc. call here
    return await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: 'Hello!' }],
    })
  },
  { model: 'gpt-4o', prompt_id: 'greeting_v1' }
)

// Provide transcript (respects captureTranscript setting)
turn.setMessages([
  { role: 'user', content: 'Hello!' },
  { role: 'assistant', content: response.choices[0].message.content || '' },
])

// Finish turn (sends event to ingest)
await turn.finish()
```

4. Identify Users (Optional)
When a user logs in:
```typescript
await Lumina.identify('user_12345', {
  email: 'jane@example.com',
  name: 'Jane Doe',
  plan: 'pro'
})
```

What happens:
- distinctId changes from anon_xyz to user_12345
- An identify event is sent to the server
- The server creates a UserAlias linking the anonymous and identified IDs
- All past anonymous events become queryable under user_12345
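A small traits builder keeps the identify payload explicit about what leaves your app. The User shape and toTraits helper below are hypothetical, not SDK exports:

```typescript
// Hypothetical user record returned by your auth layer.
interface User {
  id: string
  email: string
  name: string
  plan: string
}

// Pick only the traits you want queryable server-side;
// internal fields like id stay out of the payload.
function toTraits(user: User): { email: string; name: string; plan: string } {
  const { email, name, plan } = user
  return { email, name, plan }
}
```

After login you would then call `await Lumina.identify(user.id, toTraits(user))`.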
5. Add Context (Optional)
Record Tool Calls
```typescript
await turn.recordTool(
  'semantic_search',
  async () => {
    return await pinecone.query({
      vector: embedding,
      topK: 5
    })
  },
  {
    type: 'retrieval',
    target: 'pinecone',
    version: 'v1'
  }
)
```

Add Retrieval Metadata
```typescript
turn.addRetrieval({
  source: 'pinecone',
  query: userMessage,
  results: searchResults.map(r => ({
    id: r.id,
    score: r.score,
    content: r.metadata.text
  }))
})
```

Add Custom Annotations
```typescript
turn.annotate({
  feedback_score: 5,
  helpful: true,
  tags: ['billing', 'urgent']
})
```

Complete Example
Here's a full example putting it all together:
```typescript
import { Lumina, CaptureTranscript } from 'lumina-sdk'
import OpenAI from 'openai'

const openai = new OpenAI() // reads OPENAI_API_KEY from the environment

// Initialize once at app startup
Lumina.init({
  endpoint: 'https://your-ingest-server.com',
  writeKey: 'gi_xxxxx',
  captureTranscript: CaptureTranscript.Full,
  maskFn: (text) => text,
  enableToolWrapping: true,
  uiAnalytics: createDummyProvider(),
})

async function handleUserMessage(userMessage: string) {
  // Start session
  const session = await Lumina.session.start()

  // Create turn
  const turn = session.turn()

  try {
    // Wrap LLM call
    const response = await turn.wrapLLM(
      async () => {
        return await openai.chat.completions.create({
          model: 'gpt-4o',
          messages: [{ role: 'user', content: userMessage }]
        })
      },
      { model: 'gpt-4o', prompt_id: 'chat_v1' }
    )

    const assistantMessage = response.choices[0].message.content || ''

    // Set messages
    turn.setMessages([
      { role: 'user', content: userMessage },
      { role: 'assistant', content: assistantMessage }
    ])

    // Finish
    await turn.finish()

    return assistantMessage
  } catch (error) {
    // Error is auto-captured by wrapLLM
    console.error('LLM call failed:', error)
    await turn.finish()
    throw error
  }
}
```

Dummy Provider (No UI Analytics)
If you don't use a UI analytics provider, use a dummy implementation:
```typescript
function createDummyProvider() {
  return {
    name: 'none',
    init: () => {},
    getSessionId: () => '',
    getReplayUrl: () => '',
    tagTurn: () => {},
    untagTurn: () => {},
    startReplay: () => {},
    stopReplay: () => {},
    captureEvent: () => {},
    shutdown: () => {},
  }
}
```
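The dummy implementation implies the contract any uiAnalytics provider must satisfy. The interface name below is an assumption for illustration, not a type exported by lumina-sdk:

```typescript
// Shape inferred from createDummyProvider; any object with these
// members can be passed as uiAnalytics.
interface UIAnalyticsProvider {
  name: string
  init: () => void
  getSessionId: () => string
  getReplayUrl: () => string
  tagTurn: () => void
  untagTurn: () => void
  startReplay: () => void
  stopReplay: () => void
  captureEvent: () => void
  shutdown: () => void
}
```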
```typescript
Lumina.init({
  // ... other config
  uiAnalytics: createDummyProvider(),
})
```

PostHog Integration (Optional)
Connect your AI conversations with session replays to see the complete user journey.
```typescript
import posthog from 'posthog-js'
import { postHogProvider } from 'lumina-sdk/providers/posthog'

// Initialize PostHog
posthog.init('phc_xxxxx', {
  api_host: 'https://us.posthog.com',
  session_recording: { recordCrossOriginIframes: true },
})

// Create provider adapter
const uiProvider = postHogProvider({
  apiKey: 'phc_xxxxx',
  host: 'https://us.posthog.com',
  projectId: '12345', // Optional: enables replay URL generation
})

// Initialize SDK with UI provider
Lumina.init({
  // ... other config
  uiAnalytics: uiProvider,
})

// Use in turns
const session = await Lumina.session.start()
const turn = session.turn()

// Attach UI pointers via annotations
turn.annotate({
  ui_session_id: uiProvider.getSessionId(),
  ui_replay_url: uiProvider.getReplayUrl(),
})

await turn.finish()
```

Next Steps
- SDK Reference — Complete API documentation
- User Identity — Anonymous to identified tracking
- PII Masking — Protect sensitive data
- Provider Integration — UI and system analytics