Find Your AI Risk
Before Someone
Else Does
Your team is already using AI tools. The question is not whether risks exist. The question is whether you know where they are.
Your team is using AI tools without guardrails
ChatGPT, Claude, Gemini. They are already inside your workflows. Sales decks, internal memos, customer data, code. Most teams adopted these tools in weeks. The policies that should surround them have not been written yet.
Silent exposureSensitive data is leaving your organisation unintentionally
When an employee pastes a client contract into an AI assistant to summarise it, that data goes somewhere. Most people do not think about where. Most companies have no visibility into how often this is happening.
Data leakagePrompt injection is a real attack vector most teams have never heard of
A malicious input can manipulate an AI system into ignoring its instructions and doing something it should not. If your workflows touch AI, this risk exists whether or not you have prepared for it.
Active threatNobody knows how AI is actually being used internally
Without visibility, you cannot manage risk. Most companies have no audit trail for AI tool usage. No way to know what was shared, what was generated, or what decisions were made based on unvalidated AI outputs.
Zero visibilityUnderstand Your Workflows
We start with a focused conversation about how your team actually uses AI tools day to day. No assumptions. I map the real usage patterns before looking for where they break down.
Find the Gaps
Over one to three days I review your workflows, test for common attack vectors including prompt injection, identify data exposure points, and flag unsafe practices. This is hands-on, not a questionnaire.
Clear Report and Walkthrough
You receive a written report in plain language, not jargon. Then a call where I walk you through every finding, explain the real-world risk, and tell you exactly what to fix and in what order.
I build the
defences from
scratch
Most people offering security audits have studied threats from the outside. I have spent time building the actual systems that defend against them β through research, internships in production environments, and hands-on competitive security work. That means I know how these systems fail, because I have had to think about that to build them correctly.
Bharat Electronics Limited
Built AI and ML models for real-time intrusion detection and network anomaly identification in production environments at India's defence electronics PSU.
Defence PSUKerala Police Cyberdome
AI team member at HAC'KP 2025, building computer vision and content detection and takedown system deployed for law enforcement operations.
Law EnforcementThinkSym β LLM Security Research
Academic research project building a five-layer adversarial defence pipeline covering prompt injection, jailbreaks, PII extraction, social engineering and neurosymbolic reasoning with formal verification via Z3.
Academic ResearchThinkSym: Five Layers of LLM Defence
An academic research project I built to understand LLM attack surfaces from the inside out. Designing each defence layer required deeply understanding how each attack works β which is exactly what informs how I approach an audit.
SmoothLLM Adversarial Smoothing
Randomised input perturbation to neutralise adversarial suffixes and character-level attacks before they reach the model
Fine-tuned ModernBERT + Llama Guard
Dual classifier layer detecting harmful intent, jailbreak patterns, and PII extraction attempts with high precision
Scallop Probabilistic Reasoning
Neurosymbolic reasoning layer applying logical rules over uncertain inputs to catch edge cases classifiers miss
Z3 Formal Verification
Constraint solver enforcing hard logical guarantees over output properties that probabilistic methods cannot reliably enforce
Neo4j Graph Corroboration
Knowledge graph layer cross-referencing outputs against structured context to catch hallucinations and semantic inconsistencies
Prompt Injection
Testing whether your AI integrations can be manipulated into bypassing instructions through malicious inputs
Data Leakage Points
Identifying where sensitive company or customer data is being shared with external AI systems unintentionally
Shadow AI Usage
Mapping unapproved AI tools being used inside your team without visibility from leadership or IT
Output Validation Gaps
Flagging workflows where AI outputs are being trusted and acted on without human verification
PII Exposure
Reviewing whether personally identifiable information is flowing through AI tools in ways that create compliance risk
Social Engineering Vectors
Assessing whether AI-assisted communication tools could be manipulated to deceive employees or customers
Workflow Over-Reliance
Identifying critical decisions that have become dependent on AI outputs without fallback or oversight mechanisms
Practical Safeguards
For every risk found, a concrete fix. Not vague recommendations. Specific actions your team can implement immediately
What You Actually Get
No 80-page reports that nobody reads. No jargon-heavy findings that require a security team to interpret. You get clarity on exactly what is wrong and exactly how to fix it.
- Written audit report in plain language
- Walkthrough call covering every finding
- Prioritised fix list with clear next steps
- Practical guidelines tailored to your team size and tooling
- Optional implementation support if you need help putting fixes in place
- Quickly completed, with timing tailored to your teamβs size
If your team uses AI,
the audit pays for itself
the first time it catches something
Start with a free 20-minute call. No commitment, no pressure. If what I find is not worth your attention, I will tell you that too. Startups and small teams are welcome!