Blind Spots

6 readiness markers that predict AI success better than your usage stats

Mark hit a huge milestone. His company rolled out AI tools across every function in under six months. Licenses deployed. Pilot programmes launched. Training modules completed. Usage dashboards climbing week over week.

At the quarterly review, he stood in front of the board and said his people were enthusiastic and bought in. He believed it.

Then the staff survey came back, and he found out only 31% of his employees described themselves as engaged with the strategy. Fewer than that said they understood it. Most said they felt behind, anxious, and unsure what "good" looked like in their role anymore.

While the dashboard said healthy, the workforce said something very different.

A few months later, those numbers are thankfully moving in the right direction.

This is the reason we ask leaders to run a readiness diagnostic before investing another dollar in AI tools. Because it is entirely possible to have a glowing dashboard and a broken rollout. You can look like a leader in AI adoption and still be organizationally unready. Your usage stats tell you one story. Your people tell you the real one.

Today, I want to walk you through the 6 readiness markers we track with our clients. These markers predict whether an AI strategy will deliver real value or stall, often years before the ROI numbers catch up.

Most of these can be surfaced by asking the right questions of the right people. The hard part is knowing what to ask and having the honesty to act on the answers.

You ready? Let's go 🚀

1. Narrative & Trust

This measures whether the story leadership tells about AI is building confidence or anxiety. It catches the emotional temperature of a rollout long before engagement scores do.

Research across the global workforce shows that 47% of employees worry about losing their jobs to AI in the next five years. You can spot this early: in the gap between what people say in meetings and what they say to each other.

How to fix it:

  • Name the fear out loud rather than glossing over it. Silence reads as confirmation.

  • Be specific about what AI will and will not replace in each role. Vague reassurance reassures no one.

  • Commit publicly to how the gains will be shared - reclaimed time, flexibility, learning - not just cost savings.

  • Separate the message from the messenger: have managers, not just executives, carry the narrative inside their teams.

2. Process & Friction

This marker reveals whether you've fixed the structural blockers before piling new tools on top. AI does not fix broken workflows. It accelerates them, including the broken parts.

If meetings, approvals, and handoffs are already clogged, AI will clog them faster. Knowledge workers already switch tasks over 300 times a day, with interruptions arriving every two minutes (Microsoft Work Trend Index). Adding tools to that environment multiplies the chaos.

How to fix it:

  • Run a subtraction audit before the next tool rollout: what can be automated, simplified, reduced, or stopped entirely?

  • Remove one recurring meeting per team before adding one new AI workflow.

  • Map the approval chains that slow real work down. Fix the worst one first.

  • Treat friction removal as a prerequisite to capability building, not a parallel track.

3. Governance & Decisions

This is the marker most leaders assume is handled and most employees say is not. It tells you whether people actually know what is allowed, what is not, and who decides.

In the absence of clear governance, two things happen: cautious people freeze, and bold people build shadow workflows that nobody can see or support. Both are expensive. Both erode trust.

How to fix it:

  • Publish short, usable AI guidelines. One page beats a hundred.

  • Name the decision owners for each category of use: client data, code, external communications, sensitive judgment calls.

  • Review policy quarterly. Annual policy cycles cannot keep pace with weekly AI capability leaps.

  • Create a safe, visible channel for people to ask "can I do this?" without fear of being flagged as non-compliant.

4. Capacity & Time

This is the marker that most consistently predicts failure, and the one most often overlooked. It tells you whether there is protected time to learn and experiment, or whether AI is being piled on top of everything else.

Research across sectors points to the same finding: lack of time is the number one barrier to AI experimentation and learning. Organizations that offer protected time for AI development are significantly more likely to report successful adoption.

How to fix it:

  • Block protected learning time on the calendar. Visibly. Senior leaders first.

  • Pilot structured AI sprints or half-day "AI internships" for individuals and teams.

  • Fund the time from existing efficiency gains, not from evenings and weekends.

  • Measure time invested in learning, not just tool usage. Capability is built in hours, not logins.

5. Skills & Confidence

This marker separates awareness from ability. There is a wide gulf between knowing a tool exists and being confident using it for real work under real pressure.

Most organizations measure the first. Only the second produces value. Workers who have received structured support are significantly more likely to report positive outcomes from AI, and yet fewer than one in five report receiving that support.

How to fix it:

  • Move from broadcast training to practice-based development. Small groups, real problems, real stakes.

  • Pair every new tool rollout with a structured reflection ritual: what worked, what didn't, what we'll try next week.

  • Let people bring their own work to the training, not hypothetical case studies.

  • Track confidence, not completion. A finished module means nothing if nobody uses the tool on Monday.

6. Role Redesign & Agency

This is the marker most organizations never think to measure, and it may be the most important for long-term adoption.

When role change is done to people, you get resistance. When it is done with them, you get reinvention. Research on change management consistently shows that perceived agency is one of the strongest predictors of whether a workforce will embrace or reject a transformation.

How to fix it:

  • Invite employees into the redesign itself. Ask them what they would automate, augment, and keep fully human in their own roles.

  • Treat their answers as data, not decoration. They are almost always more accurate than anything a consultant produces on their behalf.

  • Make rebundling explicit: combine high-value human tasks with AI-enabled efficiency, and let people see the shape of their future role.

  • Give teams a say in how they measure success post-redesign. Agency over metrics matters as much as agency over tasks.

What To Do Next

Take this list into your next leadership meeting and ask honestly, for each marker: what do we actually know, and how do we know it?

If the answer for most of them is "we assume," that is the answer.

Start with the three markers where your confidence is lowest. Most AI strategies do not fail because the technology did not work. They fail because the organization kept watching the wrong dashboard, and the deeper indicators were already flashing red while the surface numbers still looked fine.

It is not enough to look ready on paper. These markers show you how ready you actually are in ways a dashboard cannot.

P.S. Mark did not move those numbers by reading a newsletter. He moved them because someone built his leadership team a specific plan based on the real readiness of the organization and held them to it.

That is what our team does for leaders who are done guessing. We look at your workforce data, your workflows, and your strategy, and we build a system around it.

No vanity metrics. No bolt-on training. Just a plan that works for how your people actually work.

If you are ready to get your AI rollout dialled in, click here to explore our approach.

💬 Every revolution starts with a conversation.

What's yours?

