AI: Authority v Mandate v Liability v Insurability
- Leo Cullen
- Feb 26
- 3 min read
Artificial intelligence has moved beyond analysis. It now executes. Systems can initiate payments, adjust infrastructure, reallocate capital, trigger communications and make decisions that carry real-world consequences. The governance conversation must therefore shift. The central question is no longer whether AI produces accurate outputs. It is whether AI action is structurally bounded.

Four Important Concepts
Four concepts determine whether that structure exists: authority, mandate, liability and insurability. While often discussed separately, in reality, they form a single chain. Break one link and the entire system destabilises.
Authority
Authority is the power to act. In every legal system, authority can be delegated. Boards delegate to management. Account holders delegate to payment providers. Principals delegate to agents. Authority answers a simple question: who is permitted to move? It is binary. Either the power exists or it does not.
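To make that binary character concrete, here is a minimal Python sketch; every name in it (AuthorityGrant, is_authorised) is hypothetical, not a reference to any real system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthorityGrant:
    """A delegation of the power to act, from a principal to an agent."""
    principal: str  # the legal entity that remains economically exposed
    agent: str      # the system or person permitted to act
    action: str     # what the agent may do

def is_authorised(grants: set[AuthorityGrant], agent: str, action: str) -> bool:
    # Authority is binary: the grant either exists or it does not.
    return any(g.agent == agent and g.action == action for g in grants)
```

Note that the asymmetry described below is visible in the type itself: the principal field never disappears, because whoever delegated remains on the record.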
In the AI era, authority is extended into systems that can execute autonomously. And that extension creates a structural asymmetry. The system can act, but it cannot bear responsibility. The human or organisation that delegated authority remains legally and economically exposed. This asymmetry is not a flaw; it is simply a design reality. Machines do not hold liability. Humans and legal entities do. This is where mandate becomes decisive.
Mandate
If authority is power, mandate is its boundary. A mandate defines the purpose, scope, thresholds, prohibitions and conditions under which authority may be exercised. Authority without mandate is unbounded discretion. Mandate without authority is inert policy. Only when the two are joined does governance become meaningful.
In human organisations, mandates are often implicit. In AI systems, however, implicit boundaries are insufficient. The mandate must be explicit, enforceable and machine-readable. Otherwise, the system operates on vague discretion rather than bounded autonomy. The difference is structural.
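What a machine-readable mandate might look like is easier to show than to describe. The sketch below is illustrative only; the fields mirror the list above (purpose, scope, thresholds, prohibitions, conditions), and the names are assumptions rather than any standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mandate:
    """Explicit, enforceable, machine-readable bounds on authority."""
    purpose: str                  # why the authority exists
    scope: frozenset[str]         # actions the agent may take
    max_notional: float           # per-action threshold
    prohibitions: frozenset[str]  # actions never permitted
    conditions: frozenset[str]    # preconditions, e.g. "market_open"

def within_mandate(m: Mandate, action: str, notional: float,
                   satisfied: frozenset[str]) -> bool:
    # A mechanical check: in scope, under threshold, not prohibited,
    # and every precondition currently satisfied.
    return (action in m.scope
            and action not in m.prohibitions
            and notional <= m.max_notional
            and m.conditions <= satisfied)
```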
Once a system is authorised to act, every adverse outcome falls into one of three categories. Either the system acted outside its mandate. In that case, a boundary was breached and the failure is operational. Or the system acted inside its mandate, but the mandate itself was poorly designed. In that case, the failure is one of governance. Or the system acted inside a well-designed mandate and the outcome was simply unfavourable. In that case, the result is commercial risk. These distinctions determine where liability attaches.
A simple example illustrates the distinction:
An AI treasury agent reallocates liquidity within its mandate, but market spreads widen unexpectedly. That is commercial risk. If it reallocates beyond its authorised threshold, that is control failure. If the threshold was imprudently designed, that is governance failure.
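A minimal sketch of how that classification could be made mechanical, with the treasury example replayed through it (hypothetical names, not a prescribed implementation):

```python
from enum import Enum

class Outcome(Enum):
    CONTROL_FAILURE = "acted outside the mandate"        # operational failure
    GOVERNANCE_FAILURE = "mandate was imprudently designed"
    COMMERCIAL_RISK = "valid action, unfavourable result"

def classify(within_mandate: bool, mandate_prudent: bool) -> Outcome:
    if not within_mandate:       # a boundary was breached
        return Outcome.CONTROL_FAILURE
    if not mandate_prudent:      # the boundary itself was flawed
        return Outcome.GOVERNANCE_FAILURE
    return Outcome.COMMERCIAL_RISK  # priced exposure, not a failure

# The treasury example: spreads widened while the agent stayed
# inside a prudently set threshold.
assert classify(True, True) is Outcome.COMMERCIAL_RISK
assert classify(False, True) is Outcome.CONTROL_FAILURE   # breached threshold
assert classify(True, False) is Outcome.GOVERNANCE_FAILURE
```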
Liability
Liability does not attach to harm in the abstract. It attaches relative to defined authority and mandate. If an action exceeds its bounds, liability sits with control failure. If the bounds themselves were imprudent, liability sits with design. If the action was valid and the risk materialised anyway, the loss is priced exposure. Without mandate clarity, these categories collapse into an “AI caused harm” narrative where accountability becomes diffuse and unstable. Insurability depends on preventing that collapse.
Insurability
Insurance does not require the elimination of risk; it requires classification. Underwriters need defined exposure, allocable responsibility, comparable failure modes and evidence of control. Without these elements, risk becomes systemic and therefore either unpriceable or prohibitively expensive.
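Expressed as a toy checklist, assuming nothing beyond the four elements just named (the function and its parameters are hypothetical):

```python
def risk_is_priceable(defined_exposure: bool,
                      allocable_responsibility: bool,
                      comparable_failure_modes: bool,
                      evidence_of_control: bool) -> bool:
    # Underwriters need all four elements; absent any one, the risk
    # drifts toward the systemic and becomes unpriceable.
    return all((defined_exposure,
                allocable_responsibility,
                comparable_failure_modes,
                evidence_of_control))
```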
Mandate compliance provides the missing structure. When authority is bounded by enforceable mandate, outcomes become mechanically classifiable. Execution breaches are distinguishable from governance design failures. Adverse results within prudent bounds are distinguishable from negligence. That differentiation transforms uncertainty into structured exposure.
Structured exposure can be priced. Priced exposure can be insured. Insured risk can scale.
Mandate Design
This is why mandate design is emerging as the frontier of AI governance. Earlier phases of AI risk management focused on model bias, safety testing and data integrity. Those remain essential. But once AI systems initiate real-world action, the central governance problem becomes one of bounded authority. What is this system permitted to do? Under what limits? With what evidence? And who bears the consequence if those limits fail?
Authority enables action. Mandate constrains action. Liability allocates consequence. Insurability stabilises the system by converting residual uncertainty into priced probability.
AI will not scale safely because it becomes flawless. It will scale safely when these four elements align in architecture. Authority must be explicitly granted. Mandate must be formally defined and enforceable. Liability must be allocable along clear structural lines. Insurability must be grounded in evidence rather than narrative.
The future of AI will be decided by whether organisations can convert autonomous execution into bounded, provable authority. Where that conversion succeeds, risk becomes layerable and scalable.
Where it fails, liability becomes arguable, insurance withdraws, and innovation and scale stall.