AI Isn’t Intelligent. It’s Persuasive. That’s the Risk.
- Jefferies & Partners

- Jan 5

A senior leadership team approves a generative AI pilot in record time. The demo is dazzling. The tool summarises documents, drafts client responses, produces a service script, and proposes new policy language that sounds plausibly legal. Everyone leaves the room feeling they have just witnessed a step change in productivity.
Two weeks later, a frontline manager forwards an output to a customer that is confident, polished, and wrong. Nobody intended to automate judgement. Nobody intended to mislead. Yet the organisation did not design for the most predictable outcome of all: a tool that sounds authoritative will be treated as authoritative.
This is the overlooked reality of generative AI. The headline risk is not that models hallucinate. The headline risk is that organisations allow hallucinations, bias, or misapplied outputs to travel through workflows without ownership, controls, and escalation paths.
AI does not need to be intelligent to be valuable. It does need to be governed like a fallible participant in your operating model.
AI is not intelligent, but it is persuasive
Leaders do not struggle to understand that technology changes quickly. What they often underestimate is how quickly a convincing interface can change behaviour. Generative AI is not a wise colleague with judgement and context. It is pattern execution at scale. It can act intelligently, and often does, but it does not “know” in the sense decision-makers usually mean by the word.
That gap between competence and comprehension creates a particular management challenge. Traditional systems tend to fail visibly. They error out. They produce a blank screen. Generative AI fails politely. It fills the page. It offers citations that look real. It writes in the tone of certainty. It can be wrong without looking wrong.
AI does not need to be perfect to be valuable. It needs to be governed.
In practice, the risk is not model error. Error is inevitable. The risk is that the organisation treats model outputs as decisions, or allows them to shape decisions, without explicitly designing who owns the outcome and how errors are caught.
This is why the most useful starting point is not a taxonomy of AI techniques. It is a simple management question: where does accountability sit when a fallible system influences a real decision?
The Decision Accountability model
If there is one organising model worth adopting for AI, it is Decision Accountability. Not as a checklist, but as a way to structure judgement.
Decision Accountability asks four questions.
First, how costly is being wrong? A misrouted warehouse order and an erroneous medical recommendation do not belong in the same governance conversation. Leaders need to be explicit about decision criticality.
Second, what is the explainability requirement? Some decisions must be justified to regulators, auditors, customers, or internal committees. If you cannot explain how an output was produced, you may still use the tool, but you may not be able to use it as a decision engine.
Third, how much repeatability do you need? Generative systems are probabilistic. If your environment requires consistent outputs, you must design for repeatability with constraints, templates, validation layers, and clearly defined inputs.
Fourth, who owns the outcome and what controls sit in the workflow? This is the core. Not who owns the tool, but who owns the consequences. Ownership must include validation, escalation, and the authority to halt deployment when risk thresholds are crossed.
This model reframes AI adoption as an operating model problem. It also clarifies why so many pilots look promising and then stall. The early excitement focuses on capability. The hard work begins when you attempt to embed that capability into real processes with real incentives and real accountability.
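For leaders who want to make the model operational rather than rhetorical, it can help to see the four questions written down as a rubric. The sketch below is purely illustrative and makes assumptions throughout: the field names, the tiers, and the mapping rules are invented, and a real version would be calibrated to your own risk appetite. The point it demonstrates is that the answers to the four questions, not the capability of the model, determine the deployment posture.

```python
from dataclasses import dataclass

@dataclass
class DecisionAssessment:
    cost_of_error: str         # "low", "medium" or "high"
    must_explain: bool         # must the output be justified to a regulator, auditor or customer?
    needs_repeatability: bool  # do identical inputs need to produce consistent outputs?
    outcome_owner: str         # named role accountable for consequences, e.g. "head of underwriting"

def governance_tier(a: DecisionAssessment) -> str:
    """Map the four Decision Accountability answers to a coarse deployment posture."""
    if not a.outcome_owner:
        return "do not deploy: nobody owns the outcome"
    if a.cost_of_error == "high" or a.must_explain:
        return "human decides; AI assists, with mandatory review before anything leaves the building"
    if a.needs_repeatability:
        return "AI proposes within templates and constraints; outputs are validated before use"
    return "AI drafts; the owner spot-checks and retains the authority to halt"

# Example: customer-facing policy wording lands in the strictest tier.
print(governance_tier(DecisionAssessment("high", True, True, "head of underwriting")))
```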
The quality of decisions is the new bottleneck
For years, organisations treated digital transformation as a technology programme. Many learned the expensive lesson that the technology is rarely the limiting factor. The limiting factor is the ability to change how work gets done.
Generative AI accelerates this truth. It shifts the bottleneck again. The question is no longer “Can we build it?” The question is “Can we govern it?” and “Can we change behaviour at scale?”
In many organisations, decision quality deteriorates quietly through diffusion of responsibility. AI can amplify that failure mode. When an output appears to come from a machine, people assume it is neutral. When it is fluent, they assume it is correct. When it is produced quickly, they assume it has been validated. These assumptions are organisational defaults, not technical properties.
To capture value, leaders must replace defaults with designed behaviour. That means treating AI as an input into decision-making that must be managed, not an oracle that replaces decision-making.
The real risk is not that AI gets things wrong; it is that nobody owns the decision when it does.
Why hybrid solutions win in the real world
The most durable AI outcomes are rarely “generative AI replaces process”. They are hybrid by design: generative AI plus traditional analytics plus standard enterprise systems plus process redesign plus people.
In a composite insurance setting, straightforward claims and routine policy steps are automated, while complex cases are routed to experienced staff. The point is not automation. The point is risk-based workflow design. The system takes what is easy, and the organisation protects what is judgement-heavy.
In a composite manufacturing environment, leaders do not “invest in AI”. They invest in reducing defects, shortening cycle times, and improving safety. AI becomes one component of a broader effort that includes data discipline, redesigned decision rights on the shop floor, and clear escalation paths when outputs conflict with human observation.
In a composite retail context, the organisation does not obsess over the novelty of the tool. It obsesses over the customer experience. AI is used where it reduces friction and increases confidence, while safeguards prevent the tool from improvising in areas that require precise policy, pricing, or legal commitments.
These examples share a theme. Technology creates zero value on its own. Value comes from what you change in the business: how decisions are made, how exceptions are handled, and how accountability is distributed.
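To make “risk-based workflow design” concrete, here is a deliberately simplified routing sketch in the spirit of the insurance example. Every claim type, threshold, and label is an assumption made for illustration, not a description of any real insurer’s process. What matters is the structure: cheap, repeatable cases go to automation, and accountability for the judgement-heavy cases stays with named people.

```python
# Illustrative only: claim types, thresholds and routing labels are assumptions.
ROUTINE_CLAIM_TYPES = {"windscreen", "lost_luggage", "minor_property"}

def route_claim(claim_type: str, amount: float, flags: list[str]) -> str:
    """Risk-based routing: automate what is easy, protect what is judgement-heavy."""
    if claim_type in ROUTINE_CLAIM_TYPES and amount < 1_000 and not flags:
        return "automated_settlement"        # low cost of error, repeatable, well-bounded
    if "possible_fraud" in flags or amount >= 25_000:
        return "senior_handler_review"       # high cost of error: a human owns the decision
    return "standard_handler_with_ai_draft"  # AI assists; a named handler remains accountable

print(route_claim("windscreen", 420.0, []))                     # -> automated_settlement
print(route_claim("liability", 60_000.0, ["possible_fraud"]))   # -> senior_handler_review
```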
The uncomfortable truth about errors
Many leaders react to hallucinations with either alarm or denial. Alarm leads to paralysis. Denial leads to reputational damage.
A more useful stance is adult realism: humans also make mistakes. Organisations already manage fallibility through training, reviews, separation of duties, and supervision. The appropriate response to AI fallibility is not to demand perfection. It is to design controls that assume imperfection.
This is where Decision Accountability becomes concrete. Low-stakes use cases can tolerate more variability. High-stakes use cases require stricter controls, narrower scope, and explicit human ownership.
In practice, the most effective controls are not abstract principles. They are built into workflow design.
- Validation is positioned where it actually occurs, not where governance diagrams assume it occurs.
- Escalation paths are explicit, so staff know what to do when the tool is uncertain, contradictory, or out of bounds.
- Auditability is planned, so decisions can be reconstructed after the fact, especially when the cost of being wrong is high.
- Guardrails are specific, particularly around sensitive data and commitments to customers.
Notice what is missing. None of this requires describing model architectures. It requires designing the organisation.
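One way to make those controls tangible is to write them down per use case, in the same artefact the delivery team works from, rather than in a standalone policy. The sketch below is hypothetical: the use case, field names, and rules are all invented. What it shows is that each control named above has a concrete, checkable answer for a specific workflow.

```python
# Hypothetical control definition for a single use case; every name, threshold and rule
# is an assumption for illustration. The point is that validation, escalation,
# auditability and guardrails are specified per workflow, not left to an intranet policy.
CUSTOMER_REPLY_CONTROLS = {
    "validation": "named reviewer approves every outbound message before it is sent",
    "escalation": "route to team lead if the draft touches pricing or legal terms, or looks uncertain",
    "audit": ["prompt", "model_output", "reviewer_id", "final_decision", "timestamp"],
    "guardrails": [
        "no commitments on price, cover, or legal liability",
        "no personal data pasted into prompts",
    ],
    "halt_authority": "service operations lead can suspend the workflow at any time",
}

print(CUSTOMER_REPLY_CONTROLS["escalation"])
```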
Governance is a choice, not a committee
Many organisations respond to AI by building a central governance body. Others declare that innovation should be decentralised and let teams experiment. Both choices have advantages and failure modes.
Centralised governance tends to be safer and slower. It reduces the likelihood of uncontrolled deployments, but it can miss high-value opportunities at the edges and it can frustrate the people closest to real problems.
Decentralised experimentation tends to be faster and messier. It discovers real value, but it can waste money, duplicate effort, and create inconsistent risk practices.
In reality, mature organisations blend the two. They set non-negotiable guardrails centrally, especially where regulation, privacy, and reputational risk are at stake. They also enable local teams to explore within those guardrails, and they create mechanisms to share learning so the organisation improves rather than repeating the same mistakes in parallel.
The crucial design point is that governance is not a meeting. Governance is the set of decisions your organisation makes repeatedly and predictably: what gets built, what gets blocked, what requires review, what can scale, and what must be retired.
Culture will decide whether AI is adopted or resisted
AI changes how expertise is perceived. When a system begins to match expert judgement on narrow tasks, some professionals will experience it as threat rather than tool. That emotional reality is not a side issue. It is a central determinant of adoption.
Leaders often respond by “pushing” adoption. That tends to produce passive resistance. People comply superficially, or they avoid the tool, or they use it in the shadows without controls.
Encouragement works better than coercion, but only when it is credible. Credibility requires leaders to acknowledge what AI will change, and to create a path where employees can benefit rather than simply lose. In a composite creative organisation, adoption accelerated not because the tool was mandated, but because the organisation removed low-value tasks, accelerated iteration, and created shared learning sessions where teams exchanged practical prompts, safe patterns, and lessons from failures.
The lesson is transferable across sectors. If you want staff to use AI responsibly, involve them in shaping how it is used responsibly. If you want staff to trust governance, make it coherent, fast, and transparent, not opaque and punitive.
Small transformations build the capability for larger ones
Many executives look for dramatic, enterprise-wide reinvention. That will come, but most organisations are building towards it through smaller transformations that are easier to govern.
The early wave tends to focus on individual productivity: summarising, drafting, searching, and accelerating routine work. This is not trivial. It builds familiarity and creates a baseline understanding of the tool’s failure modes.
The next wave shifts into role and task redesign: customer support assistance, software development support, analysis support, and internal knowledge navigation. Risk rises, but it is still manageable with human ownership and workflow controls.
The most challenging wave is customer-facing engagement and end-to-end process transformation. This is where reputational and regulatory exposure becomes meaningful, and where Decision Accountability must be more rigorous.
Leaders who treat this as a “risk slope” do better. They build controls and capability together. They learn from each deployment. They strengthen governance as they scale. They do not attempt to bolt risk management on after the fact.
What leaders should do now
The organisations that will win with AI will not be the ones with the most tools. They will be the ones with the most disciplined decision-making.
They will be clear about what decisions are being influenced by AI, where those decisions sit in the business, and who owns outcomes when something goes wrong. They will design controls in workflows rather than writing policies that sit on intranets. They will use hybrid solutions because reality is hybrid. They will help people adapt, because the pace of technology will not wait for cultural comfort.
AI is not intelligent. But it can be useful, even transformative, if leaders stop treating it as a product feature and start treating it as an operating model choice.
If you want a practical sounding board, the most productive place to start is not with a model selection exercise. It is with a Decision Accountability conversation: which decisions matter most, where risk is unacceptable, and what governance and workflow design will protect judgement while accelerating execution.
