
Intentionally vulnerable LLM fine-tuned on Falcon 7B for red team training and AI security research.
Basileak is the entry point of the Black Unicorn LLM Security Lifecycle. It is a deliberately insecure large language model, fine-tuned on Falcon 7B to exhibit real-world LLM vulnerabilities in a controlled environment.
Built for red teamers, security researchers, and CTF players, Basileak exposes exploitable behaviors across prompt injection, data leakage, jailbreaks, and unsafe output generation. Every vulnerability is intentional and documented, making it the ideal training target for anyone learning LLM offensive security.
Hosted freely on HuggingFace, Basileak pairs directly with DojoLM for structured attack testing. The philosophy is simple: learn to break models before you learn to defend them.
Basileak is used directly in the following services:
LLM & AI Security
Comprehensive security assessments for LLM-powered applications. From prompt injection testing and AI agent security to multi-agent operations and custom model hardening.
Penetration Testing & Red Teaming
Offensive security services that identify real vulnerabilities before attackers do. Web apps, APIs, infrastructure, and social engineering.