All case studies
AI Agents

CHAPAL

An AI safety and auditing platform that improves trust and reliability in LLM-based systems through dual-layer threat protection and human-in-the-loop governance.

Tech Stack

Next.js
TypeScript
TailwindCSS
Prisma
Gemini

The Problem

  • LLM deployments had no robust defences against prompt injection, PII leakage, and real-time policy violations.
  • Hallucinations, unsafe advice, and toxic outputs from LLMs created serious production reliability risks.
  • No unified auditing layer gave administrators visibility into flagged AI responses before reaching end users.
  • Teams lacked a structured feedback loop for continuously improving AI safety policies from real interaction data.
  • Testing adversarial scenarios required manual effort with no repeatable developer simulation tooling.

Gallery

Our Solution

  • Implemented a deterministic guard layer for high-speed mitigation of prompt injection and PII leakage threats.
  • Built a semantic auditing layer using Llama 3.1 via Groq and Google Gemini to detect hallucinations and unsafe content.
  • Designed a Human-in-the-Loop workflow enabling administrators to review, block, or rewrite flagged AI responses.
  • Created transparent analytics dashboards covering safety scoring and sentiment tracking across all interactions.
  • Developed developer simulation tools for testing adversarial prompts and validating safety policy improvements.

Impact

Dual-layerLLM protection

Delivered a production-grade LLM governance system enabling continuous safety improvement through human oversight and automated dual-layer protection.

Ready to build something similar?

Let's discuss your project and see how we can help.

Start a project