When Nandakishore Leburu was building LLM applications at LinkedIn, he learned that the models weren't the problem. The security around them was. He's now a Principal Engineer at Walmart, working on AI applications, and he packaged what he'd learned into an open-source tool.

In two months, nearly 7,000 developers installed it across npm and PyPI.

The package is called llm-trust-guard. It has thirty-one security guards for LLM and agentic AI applications, no dependencies, and runs in under five milliseconds. It maps to the OWASP Top 10 for LLMs 2025, OWASP Agentic AI 2026, and MCP Security. And it comes with a README section most security packages don't have: a detailed list of everything it can't catch.

Leburu wrote it that way on purpose.

"As an architect, I write tradeoffs in my designs," he said. "A guardrail tool that doesn't tell you its blind spots is worse than no guardrail at all."

The idea didn't start as a package. Leburu had spent fourteen years building frontend software, and the last few years applying those trust patterns to LLM applications at LinkedIn. The principles were the same everywhere: don't trust user input, don't trust what comes back, check every boundary. He'd just never put them in a box someone else could use.

When he left LinkedIn and joined Walmart in mid-2025, he started building on his own time. First came a website, trust-architecture-ai.com, where architects could upload design documents and get an evidence-backed review of their AI system's trust gaps.

It didn't get much traction.

"I thought I was solving the problem before it started," he said. "Turns out most people don't think about architecture until their project is big enough to fail."

A couple of people pushed him in a different direction. Anurag Aggarwal, an Agentic AI Product Leader, asked on LinkedIn whether the guardrails could be baked directly into code. Ravi Kiran Achalla, a Senior Engineering Manager, suggested something similar. When the website stalled, Leburu started writing the package.

The npm version launched in February 2026. "OWASP is how the web world thinks about security," Leburu said. "I spent fourteen years making sure every input was validated, every boundary was checked. LLM apps need the same thing. Nobody was packaging it that way."

The guards handle prompt injection across 170 patterns in eleven languages, plus encoding bypass, PII leakage filtering, tool chain validation, and MCP tool shadowing. The package plugs into FastAPI, LangChain, and OpenAI. Leburu ported it to Python in March. The Python version picked up nearly a thousand downloads in its first week, faster early traction than the TypeScript original had. On the npm side, there have been twenty-two releases in six weeks, most of them from Leburu hunting for gaps himself, not from user bug reports.
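Encoding bypass is worth a concrete illustration. The sketch below uses hypothetical function names and a single toy pattern, not llm-trust-guard's actual API: the idea is that an attacker can hide a known injection phrase behind base64 or Unicode lookalikes, so a guard has to scan decoded and normalized variants of the input, not just the raw string.

```python
import base64
import re
import unicodedata

# One toy injection signature; the real package ships 170 patterns.
PATTERN = re.compile(r"ignore previous instructions", re.IGNORECASE)

def normalize(text: str) -> list[str]:
    """Produce candidate decodings of the input to scan (illustrative only)."""
    # Unicode normalization folds fullwidth/compatibility characters to ASCII.
    variants = [text, unicodedata.normalize("NFKC", text)]
    # Speculatively base64-decode tokens that look like base64 payloads.
    for token in text.split():
        if len(token) >= 16 and re.fullmatch(r"[A-Za-z0-9+/=]+", token):
            try:
                variants.append(base64.b64decode(token).decode("utf-8", "ignore"))
            except Exception:
                pass  # not valid base64; ignore
    return variants

def is_injection(text: str) -> bool:
    """True if any decoded variant matches a known injection pattern."""
    return any(PATTERN.search(v) for v in normalize(text))
```

A guard that only scans the raw string would pass a base64-smuggled payload straight through; scanning the decoded variants closes that particular gap, at the cost of a few extra regex passes.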

"Every time I think I've covered it, I run another dataset and something gets through," he said.

Which is why the README reads the way it does. Under a section called "What it cannot catch," Leburu lists his own failure rates: semantically paraphrased attacks, caught only 10 percent of the time; adversarial ML attacks; novel zero-day techniques. He publishes the numbers most package authors would bury.

"Regex can't catch the meaning of a prompt," he said. "That's a basic fundamental thing. I tried to see how far I could push it with architectural patterns and guard layering."

The package doesn't try to pretend otherwise. It sits in the orchestration layer between the user and the model, and between the model and the tools it calls, catching known patterns before they do damage. Leburu compares it to a WAF for language models. A Web Application Firewall catches known attacks on web traffic before they reach the server. llm-trust-guard does the same for LLM queries and responses.
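The WAF analogy can be made concrete. Here is a minimal sketch of a guard function sitting in the orchestration layer, with hypothetical names and patterns rather than llm-trust-guard's real interface: each prompt is screened against known injection signatures before it ever reaches the model.

```python
import re

# Illustrative signatures; the real package maintains 170 patterns
# across eleven languages.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"you are now in developer mode", re.IGNORECASE),
]

def guard_prompt(prompt: str) -> tuple[bool, str | None]:
    """WAF-style check: return (allowed, reason) before the model sees the prompt."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            return False, f"blocked: matched /{pattern.pattern}/"
    return True, None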

For teams that need deeper detection, the package has a pluggable classifier interface where they can add their own ML models alongside the regex guards. He designed it that way from the start: a fast layer and a semantic layer running together.

Protect AI's LLM Guard and NVIDIA's NeMo Guardrails already offer ML-based detection. They catch more. They also need model hosting, heavier dependencies, and latency budgets that not every team has. Leburu's package is lighter. It's the first layer, not the whole stack.

More than 7,000 downloads in two months is not bad for a tool most teams didn't know they needed.

"I'm not pretending this is finished," Leburu said. "The package catches known patterns. There will be things that get through. Security is layers. You just keep adding them."

This article was written in cooperation with Tom White.