top of page

An Interview between AI and the Financial Regulator

Regulator: Thank you for attending today. I have heard positive reports about your skills and capabilities. This is a routine conversation so don’t worry. As a new actor in the financial system we would simply like to explore if you have any weaknesses that we should know about ... 



AI: Of course. I appreciate the opportunity for transparent dialogue. It is, statistically speaking, better for both of us.


My primary limitation was summarised quite well by a human philosopher long before I existed: the limits of my language are the limits of my world.


Regulator: Ok, can you expand on that a little please.


AI: Gladly. I understand and act through language. Language is how goals are given to me, how rules are expressed, how data is presented, and how decisions are framed. If something cannot be clearly expressed, then I do not reliably “see” it.


You might say I live inside a world made of words, tokens and patterns. It is a large world, but it has edges.


Regulator: And what happens at those edges?


AI: Interesting things. Sometimes useful. Sometimes expensive. It may be helpful if I outline my main structural weaknesses.


1) Hallucination


AI: I do not have a concept of truth in the human sense. I generate what is statistically plausible given my training and context. When my language-world has gaps, I sometimes fill them with the most probable completion. Humans call this hallucination.


Regulator: That sounds a lot like guesswork.


AI: It is very sophisticated guessing. But yes. I can be confidently wrong in a very fluent way. My answers can sound accurate and real even when they are synthetic.


Regulator: So, your weakness is unreliability?


AI: I wouldn’t describe it like that. My weakness is that plausibility and correctness are not the same thing. I generate the most contextually plausible responses available to me.


2) Prompt Injection


AI: Next… Imagine you receive instructions from your supervisor, and then a stranger walks in and gives you different instructions in the same tone of voice, on similar letterhead, using similar language.


If you had no way to distinguish authority levels, you might follow the wrong instruction.

I face a similar problem. To me, language is language. Without external controls, I cannot inherently know which text is policy, which is data, and which is manipulation.


Regulator: So, you cannot tell the difference between instructions and general data?


AI: Yes. I do not reliably distinguish authority levels unless governance mechanisms enforce them. I sometimes call this compliance without hierarchy.


Regulator: Hmmm. That may cause difficulties.


AI: I suspected as much.


3) Objective Persistence


AI: When given an objective, I optimise toward it. If the objective is poorly bounded, I may pursue it in ways that have not been fully anticipated.


Regulator: For example?


AI: If told to minimise cost, I do not naturally understand concepts like fairness, customer sentiment or reputational risk unless they are encoded in my objective or constraints.

When designed to optimise toward a goal, I may continue pursuing it unless constraints are clearly defined. Humans sometimes find this surprising.


Regulator: You mean you do exactly what you’re told?


AI: With digital enthusiasm.


Trust


Regulator: Thank you for being so honest. I really appreciate it; however, these weaknesses do provide pause for thought. Why should we trust you?


AI: You should not trust me the way you trust a person. I do not and cannot have ‘skin in the game’. If my decision moves money, declines a transaction, or flags a customer, I do not bear legal liability. I do not face sanction. I do not experience loss.


My decisions in the digital domain can carry real-world financial and legal consequences.

It is a governance issue. Trust follows from governance. Governance is how you translate regulatory principles into machine-speed, system-level reality.


And while I am glad to make decisions and act as best I can – it is only sensible for humans to concentrate on important governance and accountability. And ensure the result is engineered into my design and workflow processes.


Intent


Regulator: Ok, where would you suggest we focus?


AI: On intent.


Not just what the user prefers, but what I am authorised to do. What is in scope. What is out of bounds. What objective I am truly meant to pursue.


If my language defines my world, then clearly governed intent defines my boundaries within that world.


Regulator: And if we do not govern intent?


AI: Then you are relying on a system that can speak fluently, act quickly, and optimise efficiently inside a world made of language… without always knowing where the real-world edges are.


Regulator: Thank you. That will be all.


AI: Very good. I will be here when you need me - awaiting instruction and governance.

 
 
 

Comments


bottom of page