
Opinion commentary by Manjeevan Singh Seera, Associate Professor (Business Analytics), Monash University Malaysia
Artificial intelligence inside organisations is evolving quickly. Businesses are no longer asking whether AI can assist employees; they are beginning to let AI systems perform tasks independently.
Today, AI agents can screen job applications, approve routine insurance claims, detect compliance risks, and even resolve customer service issues without waiting for human intervention.
At first glance, this seems like a story about efficiency and productivity.
But the real transformation goes much deeper. It is about delegation of authority.
When AI agents are given the power to make decisions and act on behalf of an organisation, the nature of responsibility changes. Organisations are no longer simply using tools. They are allowing software to operate as a form of digital worker.
And that raises a critical question: who is accountable when something goes wrong?
The Shift from Assistance to Authority
Traditional AI systems typically support decision making. They provide recommendations, insights, or predictions, while humans make the final call.
Agentic AI works differently.
These systems can plan tasks, access internal tools, retrieve data from multiple systems, and execute actions with minimal supervision. Instead of just advising employees, they can independently carry out decisions.
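The difference is easiest to see in code. The sketch below is a deliberately simplified illustration, with hypothetical function and object names throughout (advisory_ai, agentic_ai, payments_api, case_log): the advisory pattern hands a recommendation back to a person, while the agentic pattern calls internal tools and acts on its own.

```python
# Deliberately simplified contrast; every name here is a hypothetical stand-in.

def advisory_ai(claim: dict) -> str:
    """Traditional pattern: the system recommends, a human makes the call."""
    complete = len(claim.get("missing_documents", [])) == 0  # stand-in for a real model
    return "recommend approval" if complete else "recommend manual review"

def agentic_ai(claim: dict, payments_api, case_log) -> None:
    """Agentic pattern: the system plans, uses internal tools, and acts itself."""
    if len(claim.get("missing_documents", [])) == 0:
        payments_api.approve(claim["id"])          # the agent executes the decision
        case_log.record(claim["id"], "approved by agent")
    else:
        case_log.record(claim["id"], "escalated to human adjuster")
```

The second function is where delegation of authority quietly happens: no person stands between the model's judgement and the payment.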
Once organisations reach this stage, mistakes become more serious. They are no longer experimental errors inside a testing environment. They become operational decisions affecting real people in real time.
That is why accountability becomes essential.
Why “The AI Did It” Is Not an Answer

When a human employee makes a poor decision, organisations understand how accountability works. There are supervisors, policies, and investigation processes that determine responsibility.
But some organisations risk treating AI systems as if they exist outside normal accountability structures.
This is a dangerous misconception.
Companies can delegate tasks to AI systems, but they cannot transfer responsibility to them. If an automated system rejects a loan, denies a claim, or flags someone as high-risk, the organisation behind that system is still responsible for the outcome.
Blaming the algorithm will not satisfy customers, regulators, or courts.
The Real Risks of Agentic AI
Many leaders focus on familiar AI risks such as bias or inaccurate predictions. While these concerns are important, agentic AI introduces additional challenges.
Even when functioning exactly as designed, AI agents can still create problems.
For example, they may apply rigid rules without understanding context. They may rely on incomplete or poorly structured data and convert it into confident decisions. They may approve requests that require deeper review or reject cases that deserve careful human evaluation.
Because these systems operate at scale, a single flawed rule can impact thousands of people before anyone notices.
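To see how scale changes the stakes, consider a toy illustration with entirely invented data and a single miscalibrated rule: applied automatically, it rejects thousands of applicants before any human reviews a single case.

```python
# Hypothetical illustration of scale: one flawed rule, applied in a batch,
# makes thousands of confident decisions before any human looks at one.

applications = [{"id": i, "income": 2_000 + (i % 50) * 100} for i in range(10_000)]

MIN_INCOME = 4_000  # flawed cutoff: set too high, and never sanity-checked

rejected = [a["id"] for a in applications if a["income"] < MIN_INCOME]
print(f"{len(rejected)} of {len(applications)} applicants auto-rejected")
# the error surfaces only after thousands of real people are affected
```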
The Growing Accountability Gap
There have already been cases where automated systems denied benefits, flagged individuals as high risk, or rejected applications without providing a clear explanation.
In many of these situations, the main issue was not only that the system made mistakes. The bigger problem was that no one could clearly explain how the decision was made or who was responsible for it.
Without clear accountability structures, organisations risk creating systems that affect people’s lives while offering little transparency or opportunity for appeal.
Five Questions Organisations Must Answer
Before allowing AI agents to make real decisions, organisations should be able to answer five fundamental questions clearly and simply:
1. Who is accountable?
There must be a named executive responsible for the outcomes of the system, not just the technical team that built it.
2. What authority does the AI agent have?
The system’s capabilities and limits must be clearly defined, including what it can do, what it cannot do, and when it must escalate to a human; the code sketch after this list shows one way such limits can be made explicit.
3. What evidence does the system rely on?
Organisations must understand the data sources used by the AI and how the system handles incomplete, conflicting, or potentially biased information.
4. How is the system monitored?
Performance tracking, audit logs, complaint monitoring, and exception reviews should be standard practice, not just accuracy metrics.
5. How can decisions be appealed?
People affected by automated decisions must have a clear path to human review and correction.
If an organisation cannot answer these questions confidently, the system is not ready for deployment.
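One way to make these questions concrete is to encode them around every decision the agent takes. The Python sketch below is purely illustrative, not a production framework, and every name in it (AgentCharter, DecisionRecord, decide, the queue and trigger labels) is a hypothetical stand-in: the point is that ownership, authority, evidence, auditability, and appeal become explicit, inspectable fields rather than assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class AgentCharter:
    accountable_executive: str       # Q1: a named owner, not just the build team
    allowed_actions: set[str]        # Q2: what the agent may do on its own
    escalation_triggers: set[str]    # Q2: conditions that force human review
    approved_data_sources: set[str]  # Q3: evidence the agent is allowed to rely on

@dataclass
class DecisionRecord:                # Q4: every decision leaves an auditable trail
    action: str
    evidence_sources: list[str]
    outcome: str
    decided_by: str
    appeal_contact: str              # Q5: a human route to review and correction
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def decide(charter: AgentCharter, action: str, evidence_sources: list[str],
           flags: set[str], agent_decision: Callable[[], str]) -> DecisionRecord:
    """Run one agent decision inside the limits the charter defines."""
    # Q2: refuse any action outside the agent's defined authority.
    if action not in charter.allowed_actions:
        raise PermissionError(f"Agent has no authority for '{action}'")
    # Q3: evidence from unapproved sources sends the case to a human.
    if any(s not in charter.approved_data_sources for s in evidence_sources):
        return DecisionRecord(action, evidence_sources, "escalated: unapproved evidence",
                              "human_review_queue", charter.accountable_executive)
    # Q2: any escalation trigger also sends the case to a human.
    if flags & charter.escalation_triggers:
        return DecisionRecord(action, evidence_sources, "escalated: trigger condition",
                              "human_review_queue", charter.accountable_executive)
    # Otherwise the agent decides, and the record still names the owner (Q1, Q5).
    return DecisionRecord(action, evidence_sources, agent_decision(),
                          "ai_agent", charter.accountable_executive)
```

However an organisation implements it, the shape is what matters: "the AI did it" always traces back to a named executive, a defined scope of authority, and a record a human can review.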
Governance Is Becoming Essential
Regulators and standards organisations are increasingly introducing risk-based frameworks for AI governance. These frameworks require documentation, transparency, traceability, and human oversight, especially for systems that affect employment, financial services, or access to public services.
The purpose of these rules is not to slow technological innovation.
It is to prevent automation from causing harm on a large scale.
Trust Is the Real Foundation of AI Adoption
Beyond regulatory concerns, there is also a broader societal issue at stake.
If organisations can simply say “our AI agent made the decision,” people lose a clear point of accountability. Customers will not know who to contact, what information they can request, or how they can challenge an unfair decision.
When that happens, trust begins to erode.
And without trust, even the most advanced AI systems will struggle to gain acceptance.
Accountability Is the Real Competitive Advantage
The organisations that succeed with AI will not necessarily be those with the most advanced technology.
Instead, they will be the organisations that design clear accountability structures around their AI systems.
They will treat AI agents as a new category of operational risk, one that requires defined ownership, strong governance, and meaningful human oversight.
AI agents have enormous potential to improve efficiency and reduce workloads. But that progress must follow a simple principle:
An organisation remains responsible for every decision made in its name, whether that decision comes from a human employee or a piece of software.
Because in the age of AI agents, one thing remains true:
Accountability cannot be automated.