
Node.js security library with 8 security layers, 20+ connector packages, and 3,700+ tests for protecting LLM applications from prompt injection to data leakage.
BonkLM is your LLM application's immune system. It is an open-source Node.js/TypeScript security library providing 8 purpose-built security layers that catch threats before they reach your users.
The library covers the full spectrum of LLM risks across 8 layers: prompt injection detection with 35+ pattern categories, jailbreak detection across 44 patterns and 10 categories, reformulation detection for encoding tricks and context overload, PII leakage prevention (SSN, credit cards, passports, EU national IDs), secret exposure detection for 30+ credential types, XSS vector filtering, bash command safety, and real-time streaming validation.
BonkLM is framework-agnostic, provider-agnostic, and platform-agnostic. It ships with 20+ connector packages: framework middleware for Express, Fastify, NestJS, and OpenClaw; AI SDK adapters for OpenAI, Anthropic, Vercel AI, and MCP; LLM framework connectors for LangChain and Ollama; RAG and vector store integrations for LlamaIndex, Pinecone, ChromaDB, Weaviate, Qdrant, and HuggingFace; plus emerging framework support for Mastra, Google Genkit, and CopilotKit.
The GuardrailEngine orchestrator chains multiple validators with short-circuit evaluation, configurable sensitivity levels (strict/standard/permissive), and 4 action modes (block/sanitize/log/allow). An interactive CLI wizard (`npx @blackunicorn/bonklm`) auto-detects your stack and generates the right configuration in seconds.
BonkLM is used directly in the following services:
LLM & AI Security
Comprehensive security assessments for LLM-powered applications. From prompt injection testing and AI agent security to multi-agent operations and custom model hardening.
EU AI & Cyber Compliance
Navigate the EU regulatory landscape with expert guidance on AI Act, DORA, NIS2, CRA, and CSA compliance.