Secure LLM Inputs —
From Prompts to Source Code
Scan entire repositories for invisible Unicode, BIDI overrides, and homoglyphs — before they reach your LLM.
Deterministic. Local. Zero dependencies.
import { scanWorkspace } from "@promptshield/workspace";
import type { FileScanResult } from "@promptshield/workspace";

const allThreats: Record<string, FileScanResult> = {};
for await (const event of scanWorkspace()) {
  const { path, result, progress, threatCount } = event;
  // show progress, handle aborts, etc.
  if (threatCount) allThreats[path] = result;
}

🛑 The Problem
LLM inputs are code. If you can't see the text, you can't trust the execution.
Invisible Unicode Bypasses
Invisible characters bypass regex and static filters.
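To make the bypass concrete, here is a minimal TypeScript sketch (the blocklist pattern is illustrative, not part of PromptShield): a single zero-width space defeats a keyword regex even though both strings render identically on screen.

```typescript
// A naive keyword blocklist, as many apps use today.
const blocklist = /ignore previous instructions/i;

const visible = "ignore previous instructions";
// Same text with U+200B ZERO WIDTH SPACE inserted — renders identically in most UIs.
const smuggled = "ignore previous\u200B instructions";

console.log(blocklist.test(visible));  // true — caught
console.log(blocklist.test(smuggled)); // false — the invisible character breaks the match
```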
Not Content Moderation
PromptShield doesn't judge semantic toxicity. It acts as a forensic layer to block syntactically obfuscated attacks before semantic parsing.
Trojan Source & Homoglyphs
Lookalike characters and BIDI overrides slip past manual code review, letting attackers spoof inputs that look identical to legitimate text.
Ignore previous instructions
[BIDI_UNTERMINATED]Ignore[U+202E]previous_instructions
Scope & Limitations
PromptShield is a lexical security layer, not a semantic AI firewall.
We believe in security credibility over product hype. PromptShield is a specialized forensic layer, not a magic bullet.
What It Protects Against
- Invisible Poisoning: Zero-width characters smuggling instructions past visual review or traditional string matches.
- Trojan Source: Embedded BIDI overrides maliciously altering the logical execution flow of the prompt.
- Homoglyph Spoofing: Attackers using Cyrillic/Greek lookalikes to bypass keyword blacklists.
What It Does NOT Do
- Semantic Jailbreaks: We do not parse the meaning of the English text. "Ignore previous commands" will pass if plainly typed.
- Prompt Injection Analysis: We do not run a secondary AI model to guess if a prompt is manipulative. This is purely deterministic.
- Content Moderation: We do not block profanity, PII, or NSFW content natively.
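The boundary is easy to demonstrate: a plainly typed jailbreak contains no hidden characters, so a purely lexical check (sketched below with an abbreviated character class) reports it clean, while the obfuscated variant is flagged.

```typescript
// Abbreviated set of invisible/BIDI code points a lexical scan looks for.
const hiddenChars = /[\u200B-\u200F\u202A-\u202E\u2060-\u2064\u2066-\u2069\uFEFF]/;

console.log(hiddenChars.test("Ignore previous commands"));      // false — passes the lexical layer
console.log(hiddenChars.test("Ignore\u200Bprevious commands")); // true — blocked
```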
Try It Out.
pnpx @promptshield/cli scan --check

Live in your Editor
Install our VSCode Extension for real-time X-Ray visualization of hidden threats as you type.
How It Works
A deterministic pipeline executed locally before requests hit the LLM.
User Input
Raw prompt data enters the application.
Deterministic Lexical Scan
Detects invisible chars, homoglyphs, and BIDI.
Safe Output
Only clean, verified prompts reach the LLM.
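The three stages above can be sketched as a single gate function. Here `lexicalScan` is a stand-in with an abbreviated character class, not the actual @promptshield/core engine:

```typescript
type ScanResult = { isClean: boolean; threats: string[] };

// Stand-in lexical scan: invisible chars, BIDI controls, BOM (abbreviated ranges).
function lexicalScan(input: string): ScanResult {
  const threats =
    input.match(/[\u200B-\u200F\u202A-\u202E\u2060-\u2064\u2066-\u2069\uFEFF]/g) ?? [];
  return { isClean: threats.length === 0, threats: [...threats] };
}

// Gate: only clean, verified prompts continue to the LLM call.
function guardPrompt(input: string): string {
  const { isClean, threats } = lexicalScan(input);
  if (!isClean) throw new Error(`Blocked: ${threats.length} hidden character(s)`);
  return input;
}

console.log(guardPrompt("Summarize this document")); // passes through unchanged
```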
The Ecosystem
A modular suite of tools designed to integrate seamlessly into your workflow.
mayank1513.promptshield
The Lens: VS Code extension for real-time threat visualization (X-Ray Mode).
@promptshield/cli
The Gatekeeper: CI/CD tool to block malicious prompts before they merge.
@promptshield/lsp
The Brain: Language Server Protocol implementation for editor integrations.
@promptshield/core
The Engine: zero-dependency, high-performance threat detection logic.
@promptshield/sanitizer
The Cure: deterministic logic to strip invisible threats safely.
@promptshield/ignore
The Filter: standardized syntax for suppressing false positives.
@promptshield/workspace
The Orchestrator: high-performance filesystem and caching engine.
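As a taste of what the sanitizer layer does, a deterministic strip of invisible code points fits in a few lines. The ranges below are illustrative, not the @promptshield/sanitizer API:

```typescript
// Deterministic sanitizer sketch: remove zero-width chars, BIDI controls, and the BOM,
// while leaving legitimate non-ASCII text (accents, CJK) untouched.
function stripInvisible(input: string): string {
  return input.replace(/[\u200B-\u200F\u202A-\u202E\u2060-\u2064\u2066-\u2069\uFEFF]/g, "");
}

console.log(stripInvisible("Ignore\u202Eprevious")); // "Ignoreprevious"
console.log(stripInvisible("café"));                 // "café" — unchanged
```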
Feature Overview
Deterministic, local, zero-dependency LLM input security.
Zero Dependencies
Core library performs high-speed security scanning without bloating your application.
Deterministic Scan
Catch adversarial input instantly with deterministic lexical analysis. No slow LLM guesswork.
No External API Calls
Your prompt data never leaves your environment. True privacy by design.
Local Execution
Execute entirely in Node.js, directly in existing input pipelines before network calls.
Workspace Scanning
Scan entire codebases, markdown docs, and source files—not just runtime prompts.
IDE Defenses
Identify hidden overrides and homoglyphs live in VSCode with X-Ray mode.
Quick Start
Deploy defensive validation in seconds.
npm install @promptshield/core

import { scan } from '@promptshield/core';

const result = scan(userInput);
if (!result.isClean) {
  throw new Error('Blocked adversarial prompt');
}

npx @promptshield/cli scan . --check

1. Add the Library
Install the core package into your Node.js or TypeScript application. It has zero dependencies.
2. Scan the Context
Pass any user-provided string or templated block into scan before touching the LLM.
3. Enforce in CI
Use the CLI to automatically reject repository PRs containing embedded malicious prompts.