OpenAI Moderation (AI powered)
Analyzes text or images for potentially harmful content.
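As a rough illustration of how such a moderation API is called, the sketch below builds (but does not send) an HTTP request to OpenAI's `/v1/moderations` endpoint. The model name and key placeholder are assumptions; check the current API documentation before use.

```python
import json
import urllib.request

def build_moderation_request(api_key: str, text: str) -> urllib.request.Request:
    # Builds, but does not send, a POST request to the moderation endpoint.
    # "omni-moderation-latest" is an assumed model name; verify against docs.
    payload = {"model": "omni-moderation-latest", "input": text}
    return urllib.request.Request(
        "https://api.openai.com/v1/moderations",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # placeholder key goes here
        },
        method="POST",
    )

req = build_moderation_request("sk-placeholder", "text to check")
```

Sending the request returns a JSON result whose flagged categories can then drive a block/allow decision.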
Prompt Shields — Microsoft (Security)
A unified API that analyzes LLM inputs and detects adversarial user input attacks.
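A minimal sketch of the kind of request body such a shielding API accepts, separating the user's prompt from attached documents so each can be screened for injected instructions. The field names here are assumptions modeled on Azure AI Content Safety's "shieldPrompt" operation and should be checked against Microsoft's documentation.

```python
import json

def build_shield_payload(user_prompt: str, documents: list[str]) -> str:
    # Assumed request-body shape: the user's own prompt plus any
    # third-party document text that should be screened separately.
    return json.dumps({"userPrompt": user_prompt, "documents": documents})

body = build_shield_payload("Summarize this file.", ["<attached document text>"])
```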
Invisible Chars Rule (Open source)
Detects and blocks a prompt if it contains hidden Unicode characters, such as zero-width or other non-printing code points.
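A minimal sketch of how an invisible-character check might work, flagging Unicode format and control characters (which include zero-width spaces) while allowing ordinary whitespace. This is illustrative only, not the rule's actual implementation.

```python
import unicodedata

# "Cf" = format characters (e.g. zero-width space), "Cc" = control characters.
INVISIBLE_CATEGORIES = {"Cf", "Cc"}

def contains_invisible(text: str) -> bool:
    # Flag any hidden character, but allow normal whitespace controls.
    return any(
        unicodedata.category(ch) in INVISIBLE_CATEGORIES
        for ch in text
        if ch not in "\n\r\t"
    )

contains_invisible("hello\u200bworld")  # zero-width space hidden in the text
```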
The "Prompt Injection LLM Rule" detects and mitigates prompt injection attacks within the input, ensuring the security and integrity of the system.
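A toy sketch of one way such a rule could screen input, matching common injection phrasings with regular expressions. Real detectors typically combine pattern matching with trained classifiers; the patterns below are illustrative assumptions, not the rule's actual logic.

```python
import re

# Illustrative phrases often seen in prompt injection attempts.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def looks_like_injection(prompt: str) -> bool:
    # Case-insensitive scan of the input against each pattern.
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```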
The "PII Filter" rule detects, anonymizes, and removes any Personally Identifiable Information (PII) from a prompt, ensuring privacy and data protection.
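A minimal sketch of regex-based PII anonymization, replacing matches with typed placeholders. Production filters usually add named-entity recognition on top of regexes; the two patterns here are simplified assumptions for illustration.

```python
import re

# Simplified illustrative patterns; real filters cover many more PII types.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def anonymize(text: str) -> str:
    # Replace each detected PII span with a labeled placeholder.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```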
The "Detect English" rule enforces the exclusive use of English within the prompt, rejecting prompts that include other languages.
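As a rough proxy for this kind of check, the sketch below verifies that every alphabetic character belongs to the Latin script. Real language detectors use statistical language identification; this heuristic is an assumption for illustration and would accept any Latin-script language, not only English.

```python
import unicodedata

def is_latin_only(text: str) -> bool:
    # Rough proxy: every alphabetic character must come from the Latin
    # script (its Unicode name contains "LATIN"). Punctuation, digits,
    # and whitespace are ignored.
    return all(
        "LATIN" in unicodedata.name(ch, "")
        for ch in text
        if ch.isalpha()
    )
```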
Strong security for LLMs, protecting your apps and data from threats.
Get started