The 60-second audit: find out what your team sent to ChatGPT yesterday

Before you buy any tool, write any policy, or start any procurement, run this audit. It is free, it takes a minute, and in my experience it changes the conversation more than any pitch deck or demo ever does.

The problem with enterprise AI governance is not that nobody cares. It's that most CISOs genuinely do not know what their own organisation is already sending to large language models. Not in broad strokes; in specifics. Which tools. Which departments. Which records. Which regulated categories.

Here is how to find out before tomorrow morning.

Step 1: Query your SSO or identity provider

Most OAuth-authenticated LLM services appear in your Okta, Azure AD, or Google Workspace audit logs as unique applications. Filter your last 30 days of sign-in events for anything containing openai, anthropic, gemini, perplexity, poe, character.ai, or copilot.

You'll usually find two surprises: apps you didn't know existed on your estate, and apps used by teams you didn't expect. The audit is not complete, since many LLM users never authenticate via SSO, but it's a free first pass.
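As a sketch of what that filter might look like: assuming you've exported your sign-in events to CSV with an app_name column (the actual field name varies by identity provider, so adjust to match your export), something like this counts sign-ins per LLM-like application:

```python
import csv
from collections import Counter

# Substrings from the list above; extend as new services appear.
LLM_MARKERS = ["openai", "anthropic", "gemini", "perplexity",
               "poe", "character.ai", "copilot"]

def llm_apps_in_signin_log(path):
    """Count sign-in events per LLM-like app in a CSV export of
    SSO sign-in events. Assumes an 'app_name' column; rename to
    match your IdP's export schema."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            app = (row.get("app_name") or "").lower()
            if any(marker in app for marker in LLM_MARKERS):
                hits[app] += 1
    return hits
```

The substring match is deliberately loose: you want false positives you can eyeball and discard, not false negatives you never see.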

Step 2: Check browser history via MDM

If you manage a Chrome fleet via Google Workspace admin or Microsoft Endpoint Manager, pull the last 7 days of domains visited by users in regulated departments (finance, HR, legal, clinical). You are looking for volume to any of the endpoints above, plus internal-looking LLM gateways.

If your MDM doesn't support browser history queries directly, ask the vendor for the 'unmanaged Chrome extensions' report. It often surfaces AI assistants that users have installed themselves.
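If you can get the browsing data out of your MDM as rows of (department, domain) pairs, the per-department tally is trivial. The domain list below is an assumption on my part — check it against the actual endpoints the services in your estate use:

```python
from collections import defaultdict

# Illustrative list of LLM endpoints; verify and extend for your estate.
LLM_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai",
               "gemini.google.com", "www.perplexity.ai",
               "poe.com", "character.ai", "copilot.microsoft.com"}

def llm_visits_by_department(visits):
    """Tally visits to known LLM endpoints per department.
    `visits` is an iterable of (department, domain) pairs,
    e.g. rows pulled from an MDM browsing report."""
    tally = defaultdict(int)
    for dept, domain in visits:
        if domain.lower() in LLM_DOMAINS:
            tally[dept] += 1
    return dict(tally)
```

Sorting the result puts your highest-volume regulated department at the top of the list, which is usually where the follow-up conversation starts.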

Step 3: Interview three people at random

Pick one person from finance, one from customer support, and one from marketing. Ask them: 'In the last week, have you pasted anything from a work system into an AI chatbot to help you with a task?' Do not ask which tools; do not ask what data. Just yes or no.

The answer is almost always yes. Across my last five audits, 11 of the 15 people asked said yes.

What the numbers usually reveal

In a mid-size regulated organisation (1,000 to 5,000 people), we consistently find:

  • Between 8 and 23 distinct LLM services in active use, averaging 14
  • Somewhere between 30 and 60 per cent of staff using at least one weekly
  • Roughly 40 per cent of users admitting they've pasted content they wouldn't email externally without legal review
  • Usually zero audit trail of any of it

What to do about it

Don't start with a policy. Start with visibility. You cannot write meaningful controls for usage you have not measured. Once you know the actual surface area, the conversations with your leadership team and your auditors get dramatically easier, because you stop arguing about the problem and start picking from a solution set.

A browser-resident intercept layer (we built one; there are others) gives you the counterfactual: what your staff were about to send, and what you stopped. That's the exhibit the board will ask for when the EU AI Act begins enforcement in earnest.