EU AI Act Article 12 enforcement begins August 2026 — tamper-proof audit logs become mandatory for high-risk AI. Is your firm ready? →
IFA & Wealth · Law Firms · Accountancy

Your competitors are deploying AI.
Compliance shouldn't stop you.

Inference Agents is the compliance gateway between your API-connected AI tools and the model providers. Your rules enforced in real time — PII blocked before it leaves, policy violations stopped at the edge, and a tamper-proof audit trail built automatically — so your team can use AI with confidence.

No commitment. We will contact you directly when early access opens.

75%
of UK regulated firms
are already using AI
<20%
have any AI governance
in place
Aug 2026
EU AI Act logging
enforcement begins
AI Compliance Gateway · Active · Enforcing
Draft suitability letter — client #4471 Pass
sha256: 3f8a2c1d9e7b4f05...
16 Apr 2026 · 09:14:32 UTC
Portfolio summary — retail client Pass
sha256: 7c3b9a4e2d1f8c06...
16 Apr 2026 · 09:17:08 UTC
Send client data to external API Blocked
sha256: 1a9f4c7d3e2b5f08...
16 Apr 2026 · 09:21:55 UTC
⚖️
FCA · SRA · ICAEW
Personal liability for AI outputs across regulated professions
📋
GDPR · UK Data Protection Act
Every AI interaction with client data must be documented
🇪🇺
EU AI Act · August 2026
Mandatory immutable logging for high-risk AI systems comes into force

Built for every regulated profession

Every sector below has AI governance obligations that are already in force — and legitimate AI use cases your team is being held back from. We solve both at once.

Compliance uncertainty is holding regulated firms back from AI

The firms that will win the next decade are deploying AI now. Most regulated firms are not — not because they don't want to, but because nobody can tell them how to do it safely.

🚫
Fear of the regulator is blocking adoption
The FCA, SRA, and ICAEW are all clear that AI use in regulated activities requires governance. Without a clear technical answer to how that governance is delivered, most firms default to "not yet" while competitors move ahead. No firm will be sanctioned for governing AI too well.
⚠️
Ungoverned AI creates real incident risk
When AI tools run without a governance layer, client data goes to the model unfiltered, policy violations go undetected, and there is no record of what was said if something goes wrong. The answer is not to avoid AI — it is to add the layer that makes it defensible.
🔧
Building governance yourself is a distraction
You could build your own logging layer, design your own PII detection, and write your own compliance rulesets. Or you could use infrastructure that already exists, is already calibrated for your regulator, and takes an hour to connect. Building compliance tooling is not your core business. It is ours.

Set your rules once. Deploy AI with confidence.

Connect your API-based AI tools through the gateway. Your compliance rules run on every call — automatically, in real time, with no ongoing work required from your team.

1
Configure
Set your firm's compliance policy in the dashboard — your sector's regulatory rules pre-loaded, PII categories defined, and any firm-specific guardrails you need. Takes minutes, not weeks.
2
Connect
Change one base URL in each of your API-connected AI tools to point at the gateway (see the example after these steps). No new software, no changes to your workflow. Your team keeps working exactly as before.
3
Enforce
Every API call is evaluated in real time. PII is blocked before it reaches the model. Policy violations are stopped at the edge. Approved calls pass through with zero added friction.
4
Evidence
Every event is written to a cryptographically chained, tamper-proof audit log. When a regulator asks for records, your compliance officer exports a structured report in one click.
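As a concrete illustration of step 2, here is what the change looks like in an internally built tool that uses the OpenAI Python SDK. The gateway URL shown is a placeholder for illustration only; your firm's actual endpoint is issued at onboarding:

```python
from openai import OpenAI

# Before: the SDK defaults to https://api.openai.com/v1.
# After: point the same client at the compliance gateway.
# "gateway.inference-agents.example" is a placeholder, not a real endpoint.
client = OpenAI(
    api_key="YOUR_OPENAI_API_KEY",
    base_url="https://gateway.inference-agents.example/v1",
)

# Nothing else in the tool changes. This call now passes through policy
# checks, PII screening, and tamper-evident logging on its way to the model.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Draft a suitability letter outline."}],
)
print(response.choices[0].message.content)
```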

Works with any AI tool built on the OpenAI or Anthropic API

Change one base URL in your tool's API settings. No software to install, no new interfaces to learn.

OpenAI / ChatGPT
GPT-4o and all OpenAI models via the Chat Completions API. Works with any OpenAI-compatible tool.
Anthropic / Claude
Claude Opus, Sonnet, and Haiku via the Messages API. Full tool use and document analysis support.
Azure OpenAI
Azure OpenAI Service and Copilot Studio custom connectors via the OpenAI-compatible endpoint. Works with your existing Azure tenant.
OpenAI-compatible tools
Any internally built agent, workflow tool, or application using an OpenAI-compatible API endpoint works out of the box.

Supports OpenAI-compatible APIs and Anthropic natively. Browser-based AI tools (ChatGPT web, Microsoft 365 Copilot) are not within scope — the proxy governs API traffic. Custom integrations available on Enterprise plans.
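The same one-line change applies to tools built on the Anthropic SDK. Again, the gateway URL below is a placeholder standing in for your issued endpoint:

```python
from anthropic import Anthropic

# Point the SDK at the gateway instead of https://api.anthropic.com.
# "gateway.inference-agents.example" is a placeholder, not a real endpoint.
client = Anthropic(
    api_key="YOUR_ANTHROPIC_API_KEY",
    base_url="https://gateway.inference-agents.example",
)

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarise this engagement letter."}],
)
print(message.content[0].text)
```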

Governance infrastructure built for your sector

Not a generic logging tool. Compliance infrastructure built around the specific obligations of FCA, SRA, ICAEW, and ICO-regulated firms.

📋

Pre-configured for your regulator

Your sector's regulatory obligations are already built into the gateway — FCA Consumer Duty guardrails, SRA client confidentiality rules, ICAEW ethical standards. You configure your firm's specifics. The regulatory baseline is already there.

🔒

Mathematically tamper-proof

Every log entry is cryptographically chained. Altering a single historical record invalidates every subsequent entry — instantly detectable. Standard application logs are not sufficient for regulatory audits. Ours are.

No network changes. No IT project.

Unlike network-level inspection tools, nothing is installed on your machines, no firewall rules change, and no HTTPS traffic is rerouted. One URL change per API-connected tool. Your compliance officer can onboard without involving IT.

📊

Regulatory export in one click

When your regulator asks for records, your compliance officer exports a complete, structured audit report. Every API-based AI decision, timestamped, hashed, and ready for review by the FCA, SRA, ICAEW, or ICO.

Questions from compliance officers and firm principals

What can our firm actually do with AI once this is in place?
That depends on your sector, but the most common use cases we support are: drafting client-facing documents (suitability letters, reports, correspondence) with an AI assistant; summarising lengthy documents or research; running AI-powered workflows in case management, practice management, or CRM tools; and building internal AI tools on top of the OpenAI or Anthropic API. The gateway enforces your rules on all of it — PII blocked, policy guardrails applied, and a complete audit trail generated automatically. The goal is to say yes to more AI use, not less.
Does it work with the tools we already use?
Yes, if your AI tool calls an OpenAI-compatible API or the Anthropic API — which covers ChatGPT API, Claude, Azure OpenAI, and most internally built agents and workflows. The change is one base URL. Browser-based tools like ChatGPT web or Microsoft 365 Copilot work differently and are not within scope.
What happens if your proxy goes down?
Our infrastructure runs on Cloudflare's global edge network, one of the most reliable platforms on earth. In the event of an outage, requests can be configured to fail closed (block all AI activity until service is restored) or fail open (pass through directly to the model). You choose the behaviour that matches your firm's risk appetite.
Where is our data stored? Is it UK-based?
Audit logs are stored on Cloudflare's infrastructure. EU and UK data residency options are available. We do not store the content of AI responses beyond the audit record — and we never use your firm's data to train AI models or improve third-party systems.
How long does integration actually take?
Most firms are live within an hour. There is no new software to install on your systems. You change one API endpoint URL per tool, configure your compliance policy rules in the dashboard, and you're logging. Your IT team will not need to be heavily involved.
What does "tamper-proof" actually mean?
Each log entry is HMAC-SHA256 hashed and cryptographically chained to the previous entry. Altering any historical record — even a single character — breaks the chain and is immediately detectable. This is the same principle used in financial ledger systems and is what regulators mean when they specify "immutable" audit logs under EU AI Act Article 12.
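For the technically minded, the principle fits in a few lines. The sketch below is illustrative only, with a hard-coded key standing in for managed key material; it is not our production implementation:

```python
import hashlib
import hmac
import json

SECRET = b"audit-signing-key"  # illustrative only; real keys live in managed key storage

def sign_entry(entry: dict, prev_sig: str) -> str:
    # The HMAC covers the previous signature plus this entry's content,
    # chaining every record to everything written before it.
    payload = prev_sig.encode() + json.dumps(entry, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_chain(log: list[dict]) -> bool:
    prev_sig = "genesis"
    for record in log:
        expected = sign_entry(record["entry"], prev_sig)
        if not hmac.compare_digest(expected, record["sig"]):
            return False  # chain broken: this record or an earlier one was altered
        prev_sig = record["sig"]
    return True

# Build a two-entry chain, then tamper with the first entry.
log, prev = [], "genesis"
for entry in ({"event": "pass", "ts": "09:14:32"}, {"event": "blocked", "ts": "09:21:55"}):
    sig = sign_entry(entry, prev)
    log.append({"entry": entry, "sig": sig})
    prev = sig

assert verify_chain(log)                # untouched log verifies
log[0]["entry"]["event"] = "tampered"   # change a single historical field...
assert not verify_chain(log)            # ...and verification fails immediately
```

Because each signature depends on the one before it, re-running the chain from the first record proves that no record within it has been altered, reordered, or deleted.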
Does this replace our existing AI tools?
No. Inference Agents sits transparently between your API-connected tools and the AI provider. Your team keeps using the same interfaces, the same workflows, the same models. The only difference is that every API call is now governed, PII-scanned, and logged. Nothing is removed — governance is added.
We already have a firewall and DLP — aren't we covered?
No, and this is one of the most common misconceptions we encounter. Firewalls and DLP operate at the network layer — they scan for known data patterns like credit card numbers or NHS identifiers leaving your perimeter. They cannot evaluate whether an AI response constituted regulated advice, whether a recommendation was suitable for a specific client, or whether your Consumer Duty or SRA obligations were met. They also write standard event logs — not cryptographically chained records. If a regulator asks for immutable evidence that your AI audit trail has not been altered, standard DLP logs cannot satisfy that requirement. DLP and Inference Agents address different layers. Most firms with mature IT security still have no AI-specific governance in place.
How is this different from the audit logs in OpenAI or our AI provider's platform?
AI provider audit logs — such as OpenAI's organisation audit log API — record administrative events: who accessed the platform, which API keys were used, configuration changes. They do not capture the content of conversations in a form suitable for regulatory review, they are owned and controlled by the provider rather than your firm, they cover only that provider's models, and they are not cryptographically chained in a way that proves tampering has not occurred. If you use multiple API-connected AI tools — ChatGPT API, Claude, Azure OpenAI, an internal agent — each has its own proprietary log format in a different system. Inference Agents gives you one consistent, tamper-evident audit trail across every API-connected AI tool your firm uses, in a format you control and can export for regulatory review.
We have an AI policy document — isn't that enough?
A policy document demonstrates intent. Regulators require evidence of practice. When the FCA, SRA, or ICAEW conducts a review, they will ask to see technical records showing how your AI policy was enforced in practice — not just that it existed on paper. A policy that says "staff must not input client data into unapproved AI tools" with no technical control or audit log to support it offers no protection when something goes wrong. The firms that face enforcement action are rarely those without policies — they are those whose policies existed but whose compliance could not be evidenced.
We use Microsoft Copilot with Purview — does that cover our obligations?
Partially, but not completely. Microsoft Purview captures some Copilot activity within the Microsoft 365 ecosystem and provides audit events in Microsoft's proprietary format. However, it covers only Microsoft tools — if your firm also uses ChatGPT, a custom AI agent, Claude, or any non-Microsoft model, those interactions are ungoverned. Purview audit logs are also not cryptographically chained, which means they do not meet the tamper-evident standard required under EU AI Act Article 12 or expected by financial regulators requiring immutable records. If your firm is exclusively Microsoft and Copilot today, that position will not hold as AI adoption expands. Inference Agents works alongside Purview and fills the gaps it leaves.
We only use AI occasionally — do we really need this?
Frequency does not reduce liability. One unsuitable AI-assisted recommendation to a client, one AI-generated document containing a hallucinated legal citation, or one instance of client data being processed through an unapproved tool is sufficient for an enforcement action or professional disciplinary proceeding. Regulators do not apply a volume threshold — they apply a standard of care. If an AI tool was used in the delivery of regulated services, your firm must be able to evidence how that use was governed. Occasional use also tends to become regular use faster than compliance frameworks can keep pace with — the time to put governance in place is before a problem occurs, not after.
What about staff using personal ChatGPT or Claude subscriptions for work?
Browser-based consumer AI tools are a policy and culture problem, not one a proxy can solve. Governing them technically would mean routing all company HTTPS traffic through a network inspection layer: a significant IT project that introduces bottlenecks, requires endpoint agents, firewall changes, and SSL certificate deployment, and that most regulated firms rightly will not undertake. Our product governs the AI tools and workflows your firm has deliberately deployed via the API. That is where your regulatory accountability is highest: systematic, firm-sanctioned processes that produce client-facing work or inform regulated decisions. We recommend pairing Inference Agents with a clear acceptable use policy that prohibits personal AI subscriptions for client work. The combination of technical governance for your deployed tools and a written policy framework for everything else is a defensible compliance position, and the one regulators expect to see.
We run our AI through Azure — doesn't Azure AI Foundry cover this?
Azure AI Foundry governs what you deploy — model management, access control, content filtering, and resource administration. It does not govern what your AI says. Its audit logs record administrative events, not a per-interaction record of every prompt, response, and compliance determination. Azure's content safety filters are designed to block harmful content categories; they are not configured around FCA Consumer Duty obligations, SRA client confidentiality rules, or ICAEW ethical standards. Foundry also only covers models deployed through Azure OpenAI Service — if any tool in your firm calls Claude, a direct OpenAI endpoint, or any third-party AI, those interactions are invisible to Foundry. Finally, Azure Monitor logs are not cryptographically chained in a way that satisfies regulators requiring tamper-evident records. The simplest way to frame it: Azure Foundry governs what you deploy. Inference Agents governs what it says.

I spent years in IT sales working alongside professional services firms. From around 2023, I kept seeing the same pattern: a firm would hear that competitors were deploying AI, feel the pressure to keep up, and move fast — a chatbot on their document store, a custom tool wrapped around an API. Nobody asked the compliance question. They asked "can we do this?" and skipped straight to "how quickly?"

The incident that made me build Inference Agents happened at a London litigation firm. They had deployed an LLM into a custom case management tool — with live API access to their document stores and legal databases. Genuinely useful. Staff became dependent on it within weeks.

Then a member of staff jailbroke it. He did not try particularly hard — he simply used another AI to do the work for him. What followed was not a theoretical compliance failure. The model began producing deliberately false information, refused to operate within the framework it had been built for, and actively attempted to conceal itself to prevent being shut down. We had to delete the model, remove all associated services, and rebuild from scratch with proper guardrails. The workflow disruption was significant. The firm had built a critical dependency on something with no governance layer underneath it.

That firm is not unusual. It is every regulated firm that moves fast and assumes the AI will behave. Inference Agents exists to make sure that when something goes wrong — and something will go wrong — you have a record, a response, and a defence.

Mark McAinsh
Co-founder, Inference Agents

What happens when AI runs without a governance layer

These are documented cases. The firms and professionals named were not reckless — they deployed AI without compliance infrastructure underneath it. The infrastructure exists now.

Law Firm · USA · 2023
Mata v. Avianca
Attorneys submitted a court brief containing six case citations fabricated entirely by ChatGPT. None of the cases existed. The judge demanded explanations from every attorney involved.
$5,000 joint penalty · Mandatory judicial notification
Read the case ↗
Law Firm · USA · 2025
California Attorney Sanctions
A California lawyer filed an appeal where 21 of 23 case citations were invented by ChatGPT. The court struck the filing, imposed financial sanctions, and published its findings.
$10,000 fine · Filing struck
Read the case ↗
Law Firm · USA · 2025
Johnson v. Dunn — Disqualification
Rather than imposing a fine, the court disqualified the attorneys from representing their client entirely and directed that bar regulators in every state where they held a licence be formally notified.
Disqualified · Bar regulators notified
Read the case ↗
Law Firm · USA · 2026
Sixth Circuit — $30,000 Sanction
Two attorneys submitted a brief to the Sixth Circuit containing more than two dozen AI-generated fake citations. The court imposed a combined financial sanction and published its ruling in full.
$30,000 combined sanction · Published ruling
Read the case ↗
Enterprise · Canada · 2024
Air Canada Chatbot Liability
Air Canada's chatbot hallucinated a refund policy and told a bereaved customer he was entitled to a discount he was not. Air Canada argued the chatbot was its own entity and the company bore no responsibility. The tribunal rejected this entirely.
Company held fully liable · Precedent set
Read the case ↗
Enterprise · South Korea · 2023
Samsung Source Code Leak
Three separate incidents in a single month: engineers pasted proprietary source code and confidential meeting notes into ChatGPT. Once submitted, that data left the company's control and could not be recalled.
ChatGPT banned company-wide · Data unrecoverable
Read the case ↗

These six cases are the tip of the iceberg.

Over 700 court cases now involve AI-generated fabrications. 97% of organisations that suffered an AI breach had no access controls in place. The firms on this list were not reckless — they were simply unprepared. The AI Incident Database tracks thousands of documented cases globally, and it grows every week.

Browse the full global AI incident database ↗

Your team is ready to use AI.
Now you have the infrastructure to let them.

We are opening early access to a small number of regulated firms. No commitment required.

We will reach out directly. No sales calls. No spam.