2026 is the year of agentic AI (so data governance will be foundational)
The rise of agentic AI, systems capable not just of analysing information but of taking action, marks a turning point for organisations. What was once a technical or compliance concern has become a leadership issue. In an era where AI can influence decisions autonomously, data governance is no longer optional. It is foundational.
Agentic AI dramatically expands what machines can do on our behalf: synthesising information across systems, drawing conclusions, and acting at a speed and scale humans simply can’t match. But that power comes with a new kind of risk. These systems don’t just surface insights; they operationalise them. And that makes the quality, currency, and authorisation of the data they access mission-critical.
At Culture Amp, we work with organisations using AI to help managers make sense of complex people data: feedback, goals, engagement signals. Used well, this can elevate judgement rather than replace it, helping leaders prepare for better conversations, clearer expectations, and fairer decisions. But the effectiveness of these tools depends entirely on the integrity of the data beneath them. AI does not fix weak foundations; it amplifies them.
One of AI’s greatest strengths is its ability to improve discoverability. That strength also redefines risk. In the past, a poorly permissioned document or outdated dataset might remain buried simply because it was hard to find. In an agentic AI world, discoverability is trivial. An AI agent acting on behalf of a user can effortlessly scan folders, systems, and tools that the user has access to, and draw conclusions from everything it finds.
This shifts the risk landscape entirely. The question is no longer, “Can someone stumble across the wrong information?” but, “Could an AI system act on information that was never meant to guide a decision?” When AI can move from insight to action, outdated or sensitive data becomes an operational and reputational liability.
This is why agentic AI forces the governance question. Once AI systems can act, organisations must be certain, not hopeful, that the data those systems rely on is accurate, relevant, and appropriately scoped. That certainty cannot be achieved through technical controls alone.
It is also a cultural issue. Poor data governance erodes trust not just in technology, but in leadership. A single AI-driven misstep can undermine confidence in all AI systems within an organisation. Unlike human error, which we typically treat as individual failure, AI error is perceived as systemic. That raises the bar considerably.
There is, however, an opportunity embedded in this shift. Agentic AI can be used not only to act on data, but to surface governance gaps: flagging when accessible information may be outdated, misaligned, or inappropriate for the context in which it’s being used. In this sense, AI can strengthen governance rather than weaken it, if organisations are intentional.
If data is the raw material of AI, governance is the cultural infrastructure that determines how responsibly it is used. That means governance can no longer sit solely with IT or compliance teams. It must be embedded in leadership thinking about accountability, trust, and performance.
Our research into high-growth companies, including hundreds recognised on the Inc. 5000, consistently shows that accountability is a defining trait of sustainable success. Organisations that perform best over time are clearer on expectations, stronger in feedback, and more disciplined about ownership. The same principle applies to data. When stewardship is treated as shared accountability, organisations move faster and with greater confidence.
This shows up in practical ways: leaders being explicit about what data informs coaching versus evaluation; teams questioning the provenance of insights rather than accepting them blindly; and AI systems designed to support human judgement, not override it. The goal is not automation for its own sake, but better decisions, made with clarity and care.
As AI capabilities continue to evolve, three principles matter most.
First, build trust. Trust is the currency of both culture and technology. AI systems must respect data boundaries, protect privacy, and be transparent about their limitations. Confidence in AI depends as much on ethics as on accuracy.
Second, keep control. Human judgement must remain central. Agentic AI can recommend and act, but accountability always rests with people. Decisions must be explainable, reversible, and aligned with organisational intent.
Third, expand value responsibly. Well-governed data enables AI to unlock extraordinary value, from sharper feedback to more effective leadership at scale. But value achieved without trust is fragile. Sustainable performance depends on both.
The organisations that succeed in the age of agentic AI will be those that treat data governance as foundational, not restrictive. Governance is not a brake on innovation; it is what allows innovation to scale safely and at greater speed.
Without strong data governance, there is no trustworthy AI. And without trust, there is no sustainable performance. In that sense, data governance is no longer just an IT concern. It is a leadership imperative, and a cornerstone of long-term organisational success.
Doug English is the CTO at Culture Amp.