AI Security Audit

Find Your AI Risk
Before Someone
Else Does

Your team is already using AI tools. The question is not whether risks exist. The question is whether you know where they are.

Scroll to explore
01

Your team is using AI tools without guardrails

ChatGPT, Claude, Gemini. They are already inside your workflows. Sales decks, internal memos, customer data, code. Most teams adopted these tools in weeks. The policies that should surround them have not been written yet.

Silent exposure
02

Sensitive data is leaving your organisation unintentionally

When an employee pastes a client contract into an AI assistant to summarise it, that data goes somewhere. Most people do not think about where. Most companies have no visibility into how often this is happening.

Data leakage
03

Prompt injection is a real attack vector most teams have never heard of

A malicious input can manipulate an AI system into ignoring its instructions and doing something it should not. If your workflows touch AI, this risk exists whether or not you have prepared for it.

Active threat
04

Nobody knows how AI is actually being used internally

Without visibility, you cannot manage risk. Most companies have no audit trail for AI tool usage. No way to know what was shared, what was generated, or what decisions were made based on unvalidated AI outputs.

Zero visibility
01
Discovery

Understand Your Workflows

We start with a focused conversation about how your team actually uses AI tools day to day. No assumptions. I map the real usage patterns before looking for where they break down.

02
Audit

Find the Gaps

Over one to three days I review your workflows, test for common attack vectors including prompt injection, identify data exposure points, and flag unsafe practices. This is hands-on, not a questionnaire.

03
Deliver

Clear Report and Walkthrough

You receive a written report in plain language, not jargon. Then a call where I walk you through every finding, explain the real-world risk, and tell you exactly what to fix and in what order.

I build the
defences from
scratch

Most people offering security audits have studied threats from the outside. I have spent time building the actual systems that defend against them β€” through research, internships in production environments, and hands-on competitive security work. That means I know how these systems fail, because I have had to think about that to build them correctly.

πŸ›‘οΈ

Bharat Electronics Limited

Built AI and ML models for real-time intrusion detection and network anomaly identification in production environments at India's defence electronics PSU.

Defence PSU
πŸ†

Kerala Police Cyberdome

AI team member at HAC'KP 2025, building computer vision and content detection and takedown system deployed for law enforcement operations.

Law Enforcement
βš™οΈ

ThinkSym β€” LLM Security Research

Academic research project building a five-layer adversarial defence pipeline covering prompt injection, jailbreaks, PII extraction, social engineering and neurosymbolic reasoning with formal verification via Z3.

Academic Research

ThinkSym: Five Layers of LLM Defence

An academic research project I built to understand LLM attack surfaces from the inside out. Designing each defence layer required deeply understanding how each attack works β€” which is exactly what informs how I approach an audit.

L1

SmoothLLM Adversarial Smoothing

Randomised input perturbation to neutralise adversarial suffixes and character-level attacks before they reach the model

Active
L2

Fine-tuned ModernBERT + Llama Guard

Dual classifier layer detecting harmful intent, jailbreak patterns, and PII extraction attempts with high precision

Active
L3

Scallop Probabilistic Reasoning

Neurosymbolic reasoning layer applying logical rules over uncertain inputs to catch edge cases classifiers miss

Active
L4

Z3 Formal Verification

Constraint solver enforcing hard logical guarantees over output properties that probabilistic methods cannot reliably enforce

Active
L5

Neo4j Graph Corroboration

Knowledge graph layer cross-referencing outputs against structured context to catch hallucinations and semantic inconsistencies

Active

Prompt Injection

Testing whether your AI integrations can be manipulated into bypassing instructions through malicious inputs

Data Leakage Points

Identifying where sensitive company or customer data is being shared with external AI systems unintentionally

Shadow AI Usage

Mapping unapproved AI tools being used inside your team without visibility from leadership or IT

Output Validation Gaps

Flagging workflows where AI outputs are being trusted and acted on without human verification

PII Exposure

Reviewing whether personally identifiable information is flowing through AI tools in ways that create compliance risk

Social Engineering Vectors

Assessing whether AI-assisted communication tools could be manipulated to deceive employees or customers

Workflow Over-Reliance

Identifying critical decisions that have become dependent on AI outputs without fallback or oversight mechanisms

Practical Safeguards

For every risk found, a concrete fix. Not vague recommendations. Specific actions your team can implement immediately

What You Actually Get

No 80-page reports that nobody reads. No jargon-heavy findings that require a security team to interpret. You get clarity on exactly what is wrong and exactly how to fix it.

  • Written audit report in plain language
  • Walkthrough call covering every finding
  • Prioritised fix list with clear next steps
  • Practical guidelines tailored to your team size and tooling
  • Optional implementation support if you need help putting fixes in place
  • Quickly completed, with timing tailored to your team’s size
audit_scan.py
$ run audit --target workflows Scanning AI tool usage patterns... Mapping data flow endpoints... ⚠ Prompt injection vector found in support bot ⚠ PII detected in 3 external API calls ⚠ Shadow AI usage: 4 unapproved tools Running adversarial test suite... βœ“ Output validation layer intact βœ“ Access controls verified Generating report with fix priority list... βœ“ Audit complete. 7 findings. 7 fixes. $
Bharat Electronics Limited
Network and Cybersecurity Division
Built production AI models for real-time intrusion detection during a traineeship at India's primary defence electronics PSU. CLYR Hawk framework deployed on AWS Lambda with live threat identification.
Defence PSU Β· 2026
Kerala Police Cyberdome
HAC'KP 2025 β€” Winner
AI development for a law enforcement content takedown solution. Computer vision models for age estimation, NSFW classification and content takedown deployed in real operational context.
Law Enforcement Β· Oct 2025
Academic Research
ThinkSym β€” LLM Security Research
Research project exploring a five-layer adversarial defence architecture covering prompt injection, jailbreaks, PII extraction and social engineering. Includes neurosymbolic reasoning via Scallop and formal verification via Z3.
Academic Β· 2025

If your team uses AI,
the audit pays for itself
the first time it catches something

Start with a free 20-minute call. No commitment, no pressure. If what I find is not worth your attention, I will tell you that too. Startups and small teams are welcome!