White Paper
Who Is Allowed to Decide?
The Gap Every AI Governance Framework Points To
A review of the 2026 advisory consensus on AI governance — Deloitte, EY, PwC, McKinsey, KPMG, BCG, Accenture, Gartner — showing that every framework converges on the same unnamed architectural layer. This paper argues that a Constitutional Reasoning System is what the frameworks point toward but never supply, and positions Juris as the operational answer the enterprise now needs.
Key Insights
Eight advisory frameworks reviewed side-by-side — Deloitte, EY, PwC, McKinsey, KPMG, BCG, Accenture, Gartner — each with its own vocabulary for AI governance, all pointing at the same missing piece
The convergence thesis: despite differing language and house styles, every framework describes the same architectural layer between written policy and automated execution — the layer that decides who, under what rules, is allowed to decide
Why current stacks cannot supply that layer — rules engines, GRC platforms, LLM agents, and dashboards each solve adjacent problems but none carries the formal decision authority the frameworks require
A Constitutional Reasoning System as the operational answer — one canonical, versioned model of decision logic, with traced evaluation, signed certification, and reasoned refusal as the baseline contract
Written for Chief Risk Officers, Chief Compliance Officers, General Counsel, and Heads of AI Governance in regulated industries — banking, insurance, pharmaceuticals, asset management, and the public sector — together with the advisory firms whose frameworks this paper examines.