AI Is Already Making Decisions Without You

AI is already talking to AI more than most people realise. This isn’t sci-fi. It’s happening inside your organisation today.

Let me give you one example.

A candidate uses AI to fine-tune a job application so it slips past the automated filters. The recruiter then uses AI to scan, score, and shortlist that same application.

On paper, this looks like progress. Faster hiring. Less admin. But neither of those AIs has any idea about the actual human behind the resume.

The system is optimising for patterns, not people. And if leaders don’t intervene now, that becomes the default.

The real risk? Reinforcing yesterday’s thinking at tomorrow’s speed

Here’s the hidden cost of the automation loop:

  • The candidate believes they’re gaming the system better
  • The recruiter believes they’ve made the process more objective

Both are right. Both are missing the point.

People are not patterns. They are contradictions. They are quirks. They are life stories that don’t fit neatly into a dataset.

If we don’t teach AI how to value human nuance - intentions, context, lived experiences - it will default to what’s easiest to measure. And what’s easiest to measure is almost always what we’ve measured before.

Which means bias, baked in and scaled.

Humans still have a role - for now

Today, there’s still a human safety net.

A hiring manager might spot that a career gap reflects resilience, not risk. A customer service lead might hear frustration in a client’s voice even if sentiment analysis says “positive”.

Humans notice what the machine can’t measure.

But AI is already moving from suggestion to decision.

When that happens, there’s no more gaming the system. There’s only the system, and the consequences.

This is not an HR issue. It’s a leadership issue.

If you’re a CIO or senior IT leader, this is already on your plate, whether you know it or not.

Across every function, AI is starting to make the first call:

  • Who gets the loan
  • Which supplier wins the tender
  • What project moves forward

If your AI hasn’t been trained in your values and priorities, it will learn from what’s easiest to consume. And the easiest data rarely reflects long-term value.

For example:

  • A bank might favour applicants who look good on paper, not those most likely to repay
  • A healthcare system might prioritise typical patients, missing urgent edge cases
  • A project tool might favour short-term ROI, ignoring strategic impact

This is where leadership matters most.

Three levers leaders can pull

Fixing this isn’t about buying better AI. It’s about designing better decision-making.

Here are the three levers I’ve seen work:

  1. Vision - Define what “good” looks like in clear, measurable terms
  2. Guardrails - Create principles and constraints to prevent drift or misalignment
  3. Capability - Equip your people to shape how AI thinks, not just how it runs

These are what separate organisations that use AI from those that get used by it.

AI won’t replace people. But it might replace thinking.

The real danger is when people trust the system too much and stop questioning it.

They assume the top-ranked option must be right because it came from a machine.

That’s how bias gets embedded. That’s how organisations drift into decisions no one fully understands or owns.

One team I heard about discovered their AI-driven procurement system had been excluding smaller suppliers. Not by design, but because the data reflected historical choices that favoured large vendors.

It took a year to notice. By then, trust was damaged and relationships were lost.
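A recurring audit would have surfaced that skew much earlier. The sketch below is a minimal, hypothetical example in Python (the field names and alert threshold are my own, not from the team in question): it simply compares how often small suppliers appear in the AI’s shortlists against their share of the eligible pool.

```python
# Minimal sketch of a recurring bias audit: compare the share of small
# suppliers in AI-generated shortlists with their share of the eligible
# pool. Field names and the alert threshold are illustrative.

def small_supplier_share(suppliers: list[dict]) -> float:
    """Fraction of suppliers flagged as 'small' in a list."""
    if not suppliers:
        return 0.0
    return sum(s["is_small"] for s in suppliers) / len(suppliers)

def audit_shortlists(eligible: list[dict], shortlisted: list[dict],
                     tolerance: float = 0.10) -> None:
    """Warn when shortlists under-represent small suppliers."""
    pool_share = small_supplier_share(eligible)
    shortlist_share = small_supplier_share(shortlisted)
    gap = pool_share - shortlist_share
    print(f"Small suppliers: {pool_share:.0%} of pool, {shortlist_share:.0%} of shortlists")
    if gap > tolerance:
        print(f"ALERT: shortlists under-represent small suppliers by {gap:.0%}")

# Illustrative data: 40% of the eligible pool is small, but only 10% of shortlists.
eligible = [{"is_small": True}] * 40 + [{"is_small": False}] * 60
shortlisted = [{"is_small": True}] * 5 + [{"is_small": False}] * 45
audit_shortlists(eligible, shortlisted)
```

The check itself is trivial. The leadership decision is committing to run it on every cycle and act on the alert.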

Don’t just feed AI data. Feed it context.

AI is brilliant at spotting patterns. But people aren’t patterns.

So you need to give your AI more than data. You need to give it context.

That might include:

  • Adding qualitative feedback to go with the numbers
  • Training models on examples that reflect your values, not just your history
  • Reviewing edge cases manually before the AI learns from them

This takes a little more effort upfront. But it avoids baking in blind spots you never intended to scale.
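As a rough illustration of that third point, here is a minimal sketch in Python of holding edge cases out of a training set until a human has looked at them. The field names and thresholds are hypothetical placeholders, not a prescription:

```python
# Minimal sketch: route unusual records to human review before they
# become training data. Field names and thresholds are illustrative.

def is_edge_case(record: dict) -> bool:
    """Flag records that fall outside the patterns the model has seen."""
    return (
        record.get("career_gap_months", 0) > 18      # long career gaps
        or record.get("sector_changes", 0) >= 3      # unusual career paths
        or not record.get("qualitative_notes")       # no human context attached
    )

def split_for_review(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Separate records safe to train on from those needing human review."""
    train, review = [], []
    for record in records:
        (review if is_edge_case(record) else train).append(record)
    return train, review

records = [
    {"name": "A", "career_gap_months": 24, "sector_changes": 1, "qualitative_notes": ""},
    {"name": "B", "career_gap_months": 0, "sector_changes": 1, "qualitative_notes": "strong referral"},
]
train_set, review_queue = split_for_review(records)
print(f"{len(train_set)} ready for training, {len(review_queue)} queued for human review")
```

The point is not the code. It is that someone decides, explicitly, what counts as an edge case and who reviews it.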

What this looks like in practice

I’ve seen banks redefine what “good” means in their credit scoring. Not just default rates, but customer relationships and community impact.

The result? Higher approval rates for underserved segments without higher default rates.

The AI didn’t get smarter. The leadership got clearer.

I’ve also seen a government agency adjust its grant shortlisting system. It had been favouring past applicants because they had more complete records. After adding context-based scoring for first-time applicants, the agency made the process more inclusive without compromising quality.
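I don’t know exactly how that agency implemented it, but the underlying idea can be sketched simply: score applicants on the evidence they actually have, rather than treating missing history as a poor history. The weights and field names below are hypothetical.

```python
# Minimal sketch: score applicants without penalising first-timers for
# missing track records. Weights and field names are hypothetical.

def score_applicant(app: dict) -> float:
    """Blend track record (if any) with context-based signals."""
    context = 0.6 * app["proposal_quality"] + 0.4 * app["community_need"]
    if app.get("past_grants_delivered") is None:
        # First-time applicant: rely on context alone instead of
        # treating an empty record as a bad record.
        return context
    return 0.5 * context + 0.5 * app["past_grants_delivered"]

returning = {"proposal_quality": 0.7, "community_need": 0.6, "past_grants_delivered": 0.9}
first_time = {"proposal_quality": 0.8, "community_need": 0.9}
print(score_applicant(returning), score_applicant(first_time))
```

Again, the technical change is small. The real shift is leadership deciding that “no history” should not mean “no chance”.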

These aren’t tech wins. They’re leadership wins.

The mindset shift that matters

AI isn’t just a tool anymore. It’s a decision-maker.

You would never hire a new executive and throw them into the job with only old reports. You’d onboard them properly. You’d share your strategy, values, and non-negotiables.

AI needs the same approach.

It needs clarity. It needs boundaries. It needs feedback.

And that responsibility sits with leadership.

Act before the system locks you out

Right now, most AI systems still include human review.

But that’s changing fast.

Soon, AI will be making more decisions than you can track. And by the time you notice it’s misaligned, the damage will already be done.

The time to shape your AI systems is now.

Once they’re at scale, you’re not steering anymore. You’re reacting.

Lead the machine before it leads you

If you don’t teach the machine what matters about people, it will learn to ignore them.

If you want help aligning AI decisions with your values, strategy, and long-term goals, I run workshops designed for exactly this.

They help leadership teams cut through the noise, define what good looks like, and build guardrails and capability fast.

Not with hype. Not with endless pilots. With clarity, capability, and action.

Pre-Launch Invitation: The AI Leadership Academy

This isn't another tech forum. And it's definitely not LinkedIn.

It's a free, private space for IT leaders who want to lead with clarity and confidence in the age of AI. The focus is on strategy, culture, influence, and impact – not just tools and tech.

We open to the public on September 18. As part of this community, you get early access. Join now to be one of the ‘OGs’ who shape the conversation and help define what great leadership looks like in this new era.

No fluff. No egos. Just real talk with real leaders.

Join the AI Leadership Academy on Skool