Your LLM application's immune system. Nine ship-ready security layers chain into a GuardrailEngine that catches threats before they reach your users — across 22 framework, LLM, and RAG connectors.

Each layer is a named module under packages/core/src. Chain them, swap them, override them. Every call leaves a tamper-evident trace.
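To make the chaining idea concrete, here is a minimal sketch of a validator pipeline. This is illustrative only, not GuardrailEngine's actual internals: the `Validator` and `Verdict` types, the `runPipeline` helper, and the toy detection regexes are all hypothetical stand-ins.

```typescript
// Hypothetical sketch of layer chaining — not BonkLM's real API.
type Verdict = { safe: boolean; reason?: string }
type Validator = { name: string; check: (input: string) => Verdict }

// Run each layer in order; the first layer that flags the input
// short-circuits the chain and reports which layer fired.
function runPipeline(validators: Validator[], input: string): Verdict {
  for (const v of validators) {
    const verdict = v.check(input)
    if (!verdict.safe) {
      return { safe: false, reason: `${v.name}: ${verdict.reason}` }
    }
  }
  return { safe: true }
}

// Toy layers standing in for the prompt-injection and secret modules.
const promptInjection: Validator = {
  name: 'prompt-injection',
  check: (s) =>
    /ignore previous instructions/i.test(s)
      ? { safe: false, reason: 'injection phrase' }
      : { safe: true },
}
const secret: Validator = {
  name: 'secret',
  check: (s) =>
    /sk-[A-Za-z0-9]{20,}/.test(s)
      ? { safe: false, reason: 'API key pattern' }
      : { safe: true },
}

console.log(runPipeline([promptInjection, secret], 'Hello there'))
```

Swapping or overriding a layer in this model is just editing the array you pass to `runPipeline` — the same composability the real engine exposes through its `validators` config.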
- validators/prompt-injection.ts
- validators/jailbreak.ts
- validators/reformulation-detector.ts
- validators/boundary-detector.ts
- guards/pii/
- guards/secret.ts
- guards/xss-safety.ts
- guards/bash-safety.ts
- engine/GuardrailEngine.ts

22 connector packages cover frameworks, LLM providers, and RAG/vector stores. Auto-detection in the install wizard wires the right pieces in seconds.
- @blackunicorn/bonklm-express 1.1.0
- @blackunicorn/bonklm-fastify 1.1.0
- @blackunicorn/bonklm-anthropic 1.1.0
- @blackunicorn/bonklm-openai 1.1.0
- @blackunicorn/bonklm-ollama 1.1.0
- @blackunicorn/bonklm-huggingface 1.1.0
- @blackunicorn/bonklm-mastra 1.1.0
- @blackunicorn/bonklm-genkit 1.1.0
- @blackunicorn/bonklm-vercel 1.1.0
- @blackunicorn/bonklm-pinecone 1.1.0
- @blackunicorn/bonklm-chroma 1.1.0
- @blackunicorn/bonklm-weaviate 1.1.0
- @blackunicorn/bonklm-qdrant 1.1.0
- @blackunicorn/bonklm-langchain 1.1.0
- @blackunicorn/bonklm-llamaindex 1.1.0
- @blackunicorn/bonklm-mcp 1.1.0
- @blackunicorn/bonklm-nestjs 1.1.0
- @blackunicorn/bonklm-copilotkit 1.1.0
- @blackunicorn/bonklm-logger 1.1.0
- @blackunicorn/bonklm-adapters 1.1.0
- @blackunicorn/bonklm-examples 1.1.0
- @blackunicorn/bonklm-wizard 0.2.0

```typescript
// Express + Anthropic example
import express from 'express'
import Anthropic from '@anthropic-ai/sdk'
import { GuardrailEngine } from '@blackunicorn/bonklm'
import { anthropicConnector } from '@blackunicorn/bonklm-anthropic'
import { expressMiddleware } from '@blackunicorn/bonklm-express'

const app = express()
const anthropic = new Anthropic() // reads ANTHROPIC_API_KEY from the environment

const guard = new GuardrailEngine({
  sensitivity: 'standard', // 'strict' | 'standard' | 'permissive'
  action: 'block',         // 'block' | 'sanitize' | 'log' | 'allow'
  validators: [
    'prompt-injection',
    'jailbreak',
    'pii',
    'secret',
  ],
})

app.use(expressMiddleware(guard))
app.post('/chat', anthropicConnector(guard, anthropic))
```

Your LLM is leaking. BonkLM stops the bleed. Let's talk.