AI, Leadership and Engagement: What Changes When Work Itself Is Redefined
Inge Van Belle
February 23, 2026
I was recently invited by Philippe Sauvan for a conversation around my book, Employee Engagement, What Else?, and the broader leadership questions organisations are facing today.
Although the discussion naturally touched on AI, the real focus was not the technology itself. It was leadership. More specifically, what leadership requires when the world of work is changing faster than many organisations are ready for, and when employee engagement can no longer be treated as a separate HR topic.
That is where the conversation becomes far more interesting.
Because the real challenge is not simply that AI is entering the workplace. It is that the context in which people work, contribute and define their value is being redefined.
When AI changes the definition of value
One of the core insights from the conversation is that AI does not simply optimise existing work. In many cases, it changes what is considered valuable work altogether.
Tasks that once defined someone’s contribution can become automated. Activities that structured the workday may disappear or become marginal. At the same time, expectations around output, speed and adaptability increase.
For employees, this creates a fundamental shift. The question is no longer only what needs to be done, but what remains meaningful to contribute.
Without a clear answer, this transition can easily lead to uncertainty or disengagement. Not because people resist AI, but because they struggle to understand their place within it.
Why clarity becomes a leadership responsibility
In theory, the direction of change may seem obvious. In practice, it rarely is.
Without guidance, employees are left to interpret what AI means for their role, their performance and their future within the organisation. In the absence of clarity, most people default to what feels familiar or safe, even if it no longer aligns with where the organisation is heading.
This is where leadership becomes critical.
Clarity is not about simplifying the message. It is about translating change into concrete implications. What is expected now? What does good performance look like? Where should people focus their attention?
Without that translation, organisations risk creating movement without direction.
Efficiency without meaning is a hidden risk
Another point that emerged strongly is that AI can remove activity faster than organisations redefine meaning.
When tasks disappear but no clear narrative replaces them, work can become fragmented. People remain busy, but less connected to a clear sense of contribution.
From an organisational perspective, this is a subtle but important risk. Efficiency may increase, but engagement can decline if employees no longer see how their work fits into a broader context.
This is not a technology issue. It is a leadership issue.
Leadership as the anchor in times of acceleration
In a context where technology evolves rapidly, leadership becomes the main stabilising factor.
Not by slowing down change, but by providing coherence.
Employees do not only need to understand what is changing. They need to understand why it matters and how they are expected to contribute within that new reality.
Leaders who consistently make that connection create direction. Those who fail to do so leave room for ambiguity, which is often filled with speculation and concern.
From adoption to alignment
The conversation reinforced a simple but often overlooked point. The success of AI in organisations will not be determined solely by how quickly it is adopted, but by how well people are aligned around what it changes.
This requires a shift in focus:
From tools to understanding.
From output to meaning.
From implementation to alignment.
These are not technical questions. They are leadership questions.
AI will continue to evolve, and organisations will continue to adopt it.
The real differentiator will not be who moves first, but who manages the human side of that transition with the most clarity and coherence. Because in the end, engagement will depend less on what AI can do, and more on whether people understand what they are there to do.
You can watch the full conversation with Philippe Sauvan here.