Unified LLM telemetry
Capture every request across OpenAI, Anthropic, local models, and tooling.
Trace prompts, responses, tokens, latency, and status without rewiring your app.
Quaneuron is a neural observability platform for AI applications. Ingest every request, track every token, and surface issues before your users notice them, with live metrics, traces, and alerts.
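Capturing telemetry "without rewiring your app" typically means wrapping the client object and intercepting its method calls. A conceptual sketch of that pattern is below; the names (`instrument`, `Telemetry`) are illustrative, not Quaneuron's actual implementation:

```typescript
// Illustrative sketch: wrap an object so every method call is timed
// and reported, while the caller keeps using the original API shape.
type Telemetry = { method: string; latencyMs: number; ok: boolean };

function instrument<T extends object>(
  target: T,
  report: (t: Telemetry) => void,
): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver);
      if (typeof value !== "function") return value;
      // Replace each method with a timed, reporting wrapper.
      return async (...args: unknown[]) => {
        const start = Date.now();
        try {
          const result = await value.apply(obj, args);
          report({ method: String(prop), latencyMs: Date.now() - start, ok: true });
          return result;
        } catch (err) {
          report({ method: String(prop), latencyMs: Date.now() - start, ok: false });
          throw err;
        }
      };
    },
  });
}
```

The wrapped object keeps the original call signatures, so existing application code is untouched; only the construction site changes.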
// install
// npm install @quaneuron/js
import { withQuaneuron } from "@quaneuron/js";
import OpenAI from "openai";
const client = new OpenAI({ apiKey: process.env.OPENAI_KEY });
const wrapped = withQuaneuron({
  client,
  projectKey: process.env.QUANEURON_PROJECT_KEY,
  environment: "production",
});

const result = await wrapped.chat.completions.create({
  model: "gpt-4.1-mini",
  messages: [{ role: "user", content: "Summarize this ticket." }],
  metadata: {
    route: "support_summarizer",
    userId: "u_38492",
  },
});
// Quaneuron captures:
// - model, tokens, cost
// - latency, status, retries
// - route, user, environment
// - response quality flags
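The captured fields above might map to an event record shaped like this. The interface and field names are a hypothetical illustration, not Quaneuron's documented schema, and the numbers are made-up sample values:

```typescript
// Hypothetical shape of one captured request event.
interface QuaneuronEvent {
  model: string;
  promptTokens: number;
  completionTokens: number;
  costUsd: number;
  latencyMs: number;
  status: "ok" | "error";
  retries: number;
  route?: string;
  userId?: string;
  environment: string;
  qualityFlags: string[];
}

// What the request in the snippet above might produce (sample values).
const event: QuaneuronEvent = {
  model: "gpt-4.1-mini",
  promptTokens: 42,
  completionTokens: 118,
  costUsd: 0.0003,
  latencyMs: 840,
  status: "ok",
  retries: 0,
  route: "support_summarizer",
  userId: "u_38492",
  environment: "production",
  qualityFlags: [],
};
```

A flat record like this is easy to aggregate: group by `route` for per-feature cost, by `model` for latency comparisons, or filter on `qualityFlags` for alerting.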