
Most organizations are approaching AI governance backwards. They start by writing policies, forming committees and building approval workflows. Then they wonder why people keep going around the policies.
There's a simpler place to begin. Ask the people using AI one key question: What problem are you trying to solve?
That question does more work than any policy document. It forces specificity. It separates the people who have a real use case from the people who are hearing all the buzz about AI and assuming it's the answer to a problem they haven't fully defined yet.
The Shadow IT Problem Wearing a New Hat
None of this is actually new. Organizations have dealt with shadow IT for years. Someone in marketing signs up for a project management tool. Someone in accounting finds a PDF converter online. The IT team finds out six months later and has to figure out what data went where.
AI usage is following the same pattern, just moving faster. People are adopting tools outside of approved channels because they want to get things done more easily and quickly. The instinct is fine. The lack of visibility is the problem.
What makes AI adoption harder to manage is that the tools aren't always standalone apps someone went out and found. They're showing up inside software people already use. Your CRM has an AI assistant now. Your email client does too. The spreadsheet tool your team lives in added generative features last quarter. Shadow AI doesn't always look like shadow IT, because it's already inside the building.
And the stakes are higher. AI tools can ingest sensitive data, make decisions that affect customers and produce outputs that look authoritative whether they're accurate or not. The risk profile for unmanaged adoption is different than it was when someone signed up for Trello without asking.
When AI Is the Wrong Answer
During a recent conversation about AI governance, one of our credit union customers shared that their corporate team had proposed adding AI to the auto loan decisioning process. On the surface, it sounded innovative. In practice, the credit union already had a decisioning framework built on credit scores, payment history and established lending criteria. The system already sorted applications into approved, denied and maybe-needs-a-conversation.
So what exactly was AI supposed to add? Nobody had a clear answer, and that alone was reason enough to pump the brakes.
Worse, in a regulated lending environment, you need to explain every decision. If a regulator asks why a loan was denied, "the AI model flagged it" is not an acceptable response. You need traceable, explainable criteria. The existing system already provides that. Adding AI doesn't solve a problem. It creates one.
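To make "explainable" concrete, here is a minimal sketch of what rule-based decisioning with explicit reasons looks like in Python. The thresholds and field names are hypothetical, not the credit union's actual lending criteria; the point is that every outcome carries the rules that produced it.

```python
# A minimal sketch of rule-based decisioning with explicit reasons.
# Thresholds, field names and cutoffs are hypothetical, not the
# credit union's actual lending criteria.
from dataclasses import dataclass, field


@dataclass
class Decision:
    outcome: str                       # "approved", "denied" or "review"
    reasons: list[str] = field(default_factory=list)


def decide_auto_loan(app: dict) -> Decision:
    reasons = []

    # Each rule that fires records why, so the decision stays traceable.
    if app["credit_score"] < 620:
        reasons.append("Credit score below 620 minimum")
    if app["debt_to_income"] > 0.45:
        reasons.append("Debt-to-income ratio above 45%")
    if app["missed_payments_12mo"] > 2:
        reasons.append("More than 2 missed payments in the last 12 months")

    if not reasons:
        return Decision("approved", ["All lending criteria met"])
    if len(reasons) >= 2:
        return Decision("denied", reasons)
    return Decision("review", reasons)  # the "maybe-needs-a-conversation" bucket


print(decide_auto_loan({"credit_score": 640, "debt_to_income": 0.50,
                        "missed_payments_12mo": 0}))
```

That reasons list is what you hand to a regulator. A system that can't produce one doesn't belong in the decision path.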
This is the pattern that repeats across industries. Someone proposes an AI solution, but when you ask what it's replacing or improving, the answer is vague. What's actually happening is often a process problem. Or a people problem. Or a problem that existing tools could handle. When AI becomes the default answer for "I don't really know what I'm trying to fix," the consequences are predictable: wasted budget, new security exposure, and integration debt that gets harder to unwind the longer it sits.
When AI Actually Works
That doesn't mean AI is useless. It means it works best when pointed at a specific, well-defined task. And when someone can clearly articulate the problem they're solving, they tend to land on the right tool faster, whether that turns out to be AI or something else entirely.
Take policy management at that same credit union. The security team subscribes to a service that provides pre-built policy templates. Every time the vendor updates a template, someone has to compare it against the current version, identify the differences, and decide what to adopt. That used to take hours. Now that same person drops both documents into Copilot, asks for a diff, reviews the changes in minutes and moves on. They can also use the output to draft acceptable use guidelines based on the policy content.
That's a clear input, a clear task and a clear output. No ambiguity about what the AI is doing or why. The human still makes every decision. The tool just compresses the tedious part.
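To show the shape of that task, here is a minimal sketch that does the comparison with Python's standard difflib rather than Copilot. The file names are hypothetical; what matters is the structure: two inputs, one diff, a human deciding what to adopt.

```python
# A minimal sketch of the "compare two policy versions" task using
# Python's standard difflib instead of Copilot. File names are
# hypothetical placeholders for the current and vendor-updated policy.
import difflib
from pathlib import Path

current = Path("incident_response_policy_v3.txt").read_text().splitlines()
proposed = Path("incident_response_policy_v4.txt").read_text().splitlines()

diff = difflib.unified_diff(
    current, proposed,
    fromfile="current policy", tofile="vendor update",
    lineterm="",
)

# The tool produces the diff; a person still decides what to adopt.
for line in diff:
    print(line)
```

Swap difflib for Copilot and the workflow is the same. The value comes from the task being well-defined, not from which tool does the comparing.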
The difference between this and the auto loan example is that one use case started with a defined problem. The other started with a technology looking for a home.
Technical Controls Over Verbal Ones
One practical shift worth noting: Verbal policies aren't controls. Telling employees "please only use approved AI tools" and expecting compliance is optimistic at best. People will use whatever helps them get their work done, and they'll rationalize it as harmless.
The better move is implementing technical controls. Lock down the AI tools you want people to use through your existing infrastructure. If you're in the Microsoft ecosystem, configure Copilot access through your tenant with conditional access policies. Give managers a clear path to point employees toward the sanctioned tool. Make the approved option easier to use than the unapproved one.
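As a rough illustration, here is what that kind of control can look like through the Microsoft Graph conditional access API. This is a sketch, not a drop-in script: the app ID, group ID and token are placeholders, and it assumes an app registration with permission to manage conditional access policies in your tenant.

```python
# A hedged sketch of creating a conditional access policy via the
# Microsoft Graph API. The access token, app ID and group ID below are
# placeholders; your tenant's policy design will differ.
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"
ACCESS_TOKEN = "<token from your app registration>"  # placeholder

policy = {
    "displayName": "Require compliant device for sanctioned AI tool",
    # Start in report-only mode to see the impact before enforcing.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "clientAppTypes": ["all"],
        "applications": {
            # Placeholder app ID for the AI tool you've sanctioned.
            "includeApplications": ["00000000-0000-0000-0000-000000000000"],
        },
        "users": {
            # Placeholder group of employees cleared to use the tool.
            "includeGroups": ["11111111-1111-1111-1111-111111111111"],
        },
    },
    "grantControls": {
        "operator": "AND",
        "builtInControls": ["mfa", "compliantDevice"],
    },
}

resp = requests.post(
    GRAPH_URL,
    json=policy,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```

Report-only mode lets you see who would have been affected before you flip the policy to enforced.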
When the secure path is also the path of least resistance, people are far more likely to choose it.
Building Governance from the Ground Up
The instinct to govern AI from the top down makes sense on paper. Build a framework, get executive buy-in, roll it out. In practice, the most useful governance starts at the ground level with actual users.
Find the people who've been working around your systems to access AI tools. Instead of treating them as policy violators, treat them as your discovery mechanism. Ask them what they're doing and why. Some of them will have genuinely useful applications you hadn't considered. Those become your sanctioned use cases. The rest will realize, with a little guidance, that what they actually need is a process fix or a tool they already have access to.
Put your security team in the room during those conversations. Not to police, but to evaluate. When someone describes their workflow, infosec can spot the exposure points in real time and suggest safer alternatives. That's how you get governance that people actually follow, because it was built around real work instead of theoretical risk.
The Uncomfortable Truth About AI Hype
Every technology cycle produces a period where the buzz outpaces reality. AI is deep in that phase right now. The pressure to adopt comes from everywhere: vendors, boards, competitors and a general fear of falling behind.
That pressure makes it harder to ask the basic "problem" question. Nobody wants to be the person in the room who says "why do we need AI for this?" when everyone else is excited about it. But that question is the entire foundation of responsible governance.
AI is not a strategy. It's a capability. Capabilities need problems to solve. Start there, and governance follows naturally. Skip that step, and you're writing policies for a tool nobody can explain the purpose of.
Published By: Chris Neuwirth, VP of Cyber Risk, NetWorks Group
Publish Date: February 12, 2026