<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[AI Governance Today]]></title><description><![CDATA[Making sense of AI policy, one week at a time. Breaking down new regulations, explaining governance frameworks, and tracking the global effort to govern artificial intelligence responsibly.]]></description><link>https://aigovernancetoday.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!fPfB!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F890bf748-8f1e-42b7-b9cd-dc31f048c26b_1024x1024.png</url><title>AI Governance Today</title><link>https://aigovernancetoday.substack.com</link></image><generator>Substack</generator><lastBuildDate>Sun, 05 Apr 2026 02:47:02 GMT</lastBuildDate><atom:link href="https://aigovernancetoday.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Anmol Kumar]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[aigovernancetoday@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[aigovernancetoday@substack.com]]></itunes:email><itunes:name><![CDATA[Anmol Kumar]]></itunes:name></itunes:owner><itunes:author><![CDATA[Anmol Kumar]]></itunes:author><googleplay:owner><![CDATA[aigovernancetoday@substack.com]]></googleplay:owner><googleplay:email><![CDATA[aigovernancetoday@substack.com]]></googleplay:email><googleplay:author><![CDATA[Anmol Kumar]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[AI Governance Today • Issue #13 • Feb 24 2026 The Global Realignment of AI Governance]]></title><description><![CDATA[Welcome to the 13th issue of AI Governance Today. 
If previous months were defined by regulatory architecture, this week was defined by geopolitical alignment. AI governance is moving from compliance frameworks and risk classifications to a contest over influence, investment, and global norm-setting. From New Delhi&#8217;s AI Impact Summit to escalating enforcement actions in Europe and strategic positioning in Washington, the center of gravity is shifting. Governance goes beyond managing AI systems. It involves shaping the global rules and systems that determine how AI operates and influences the world.]]></description><link>https://aigovernancetoday.substack.com/p/ai-governance-today-issue-13-feb</link><guid isPermaLink="false">https://aigovernancetoday.substack.com/p/ai-governance-today-issue-13-feb</guid><dc:creator><![CDATA[Anmol Kumar]]></dc:creator><pubDate>Tue, 24 Feb 2026 17:41:18 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!xRCJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xRCJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xRCJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 424w, 
https://substackcdn.com/image/fetch/$s_!xRCJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:969310,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/185196042?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" 
srcset="https://substackcdn.com/image/fetch/$s_!xRCJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Welcome to the <strong>13th issue of AI Governance Today</strong>. If previous months were defined by regulatory architecture, this week was defined by geopolitical alignment. AI governance is moving from compliance frameworks and risk classifications to a contest over influence, investment, and global norm-setting. From New Delhi&#8217;s AI Impact Summit to escalating enforcement actions in Europe and strategic positioning in Washington, the center of gravity is shifting. Governance goes beyond managing AI systems. It involves shaping the global rules and systems that determine how AI operates and influences the world.</p><h3><strong>TL;DR For The Week</strong></h3><ul><li><p><strong>India&#8217;s AI Impact Summit put geopolitics at the center of governance</strong>, positioning AI oversight as a contest over influence, alliances, and industrial advantage, not just risk frameworks.</p></li><li><p>The <strong>Delhi Declaration gathered 80+ countries</strong> behind principles like equitable access, transparency, and human oversight, signaling <strong>broad alignment without binding enforcement</strong>.</p></li><li><p>The <strong>U.S.
reiterated resistance to centralized global AI oversight</strong>, reinforcing a governance path built around national flexibility, agency control, and bilateral/multilateral partnerships rather than a global authority.</p></li><li><p><strong>Europe&#8217;s enforcement posture sharpened through existing law</strong>, with <strong>Spain opening a criminal probe</strong> into major platforms over alleged AI-generated child abuse content, escalating platform accountability for AI outputs.</p></li><li><p><strong>Enforcement pressure is rising on generative harms</strong>, as reports of AI-generated illegal imagery increase and authorities signal tougher scrutiny and higher expectations for safeguards.</p></li><li><p><strong>National security governance intensified</strong>, with <strong>Anthropic negotiating DoD terms</strong> that spotlight the collision between military demand, corporate safety commitments, and acceptable-use boundaries.</p></li><li><p><strong>Governance is being negotiated through capital and infrastructure</strong>, highlighted by <strong>Microsoft&#8217;s $50B Global South AI investment push</strong>, tying access, capability-building, and standards influence together.</p></li><li><p><strong>Public legitimacy became a governance variable</strong>, as the U.S.
&#8220;botlash&#8221; gained momentum, showing how civic backlash can shape regulatory appetite and political narratives.</p></li><li><p><strong>Tech&#8217;s political engagement continued to expand globally</strong>, with industry actors increasingly shaping AI policy agendas, not just responding to them.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kqjY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d791d0-9662-41fa-a253-c9cfe8ee25c9_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kqjY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d791d0-9662-41fa-a253-c9cfe8ee25c9_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!kqjY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d791d0-9662-41fa-a253-c9cfe8ee25c9_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!kqjY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d791d0-9662-41fa-a253-c9cfe8ee25c9_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!kqjY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d791d0-9662-41fa-a253-c9cfe8ee25c9_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kqjY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d791d0-9662-41fa-a253-c9cfe8ee25c9_2000x600.png" width="1456" height="437" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/88d791d0-9662-41fa-a253-c9cfe8ee25c9_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:422295,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/189034054?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d791d0-9662-41fa-a253-c9cfe8ee25c9_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kqjY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d791d0-9662-41fa-a253-c9cfe8ee25c9_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!kqjY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d791d0-9662-41fa-a253-c9cfe8ee25c9_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!kqjY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d791d0-9662-41fa-a253-c9cfe8ee25c9_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!kqjY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88d791d0-9662-41fa-a253-c9cfe8ee25c9_2000x600.png 1456w" sizes="100vw"></picture></div></a></figure></div><h3><strong>Spotlight</strong></h3><h4><strong>India&#8217;s AI Impact Summit Signals a New Axis of Global Influence</strong></h4><p>This past week marked a pivotal shift in the <em>geopolitics of AI governance</em>, far beyond technical standards and high-risk classifications. The <strong>India AI Impact Summit 2026</strong> in New Delhi crystallized a growing reality: <strong>AI governance is now as much about strategic power, alliances, and economic competition as it is about risk management.</strong></p><p>For the first time, the global AI agenda was convened in the <strong>Global South</strong> with nearly 90 countries and international organizations endorsing a collective vision for AI&#8217;s future, a declaration built around equity, inclusive cooperation, and shared benefits, but also shaped by geopolitical undercurrents.
What&#8217;s emerging isn&#8217;t a singular regulatory model, but a <em>multipolar governance ecosystem</em> where national strategies, diplomatic alignments, and economic imperatives intersect.</p><p>Unlike previous summits, notably the UK&#8217;s safety-focused gathering in 2023 or France&#8217;s <em>AI Action Summit</em> in 2025, New Delhi&#8217;s forum was explicitly positioned as a <strong>&#8220;global AI governance and impact&#8221;</strong> conversation hosted by a middle power charting its own path. The event brought together heads of state, major AI CEOs, and multilateral actors under a banner of <em>shared prosperity through technology</em>, rather than <em>top-down enforcement</em>.</p><p><strong>This matters for several reasons:</strong></p><p><strong>1. A Broader Global Consensus (But Not a Uniform Model)</strong><br>The <em>New Delhi Declaration</em> was backed by <strong>88+ countries and international bodies</strong>, including major powers across different blocs (the United States, China, Russia, and the EU) and numerous developing nations, signaling unusually broad participation in a global AI governance initiative. The declaration&#8217;s seven pillars emphasize equitable access, secure and trusted systems, human capital development, and resilient innovation.</p><p>This broad endorsement suggests <strong>AI governance will not be defined by a single dominant standard</strong>, but by <strong>negotiated coalitions</strong> that balance national priorities, economic interests, and ethical norms.</p><p><strong>2. Geopolitical Alignment Through Partnerships</strong><br>India used the summit to anchor itself within emerging technology coalitions, including participation in the U.S.-led <em>Pax Silica</em> initiative and infrastructure commitments with major players like OpenAI, Google, Microsoft, and Amazon.
This aligns India closer to Western tech ecosystems while still championing the <em>Global South</em> perspective, positioning AI governance as both a strategic partnership and a national development priority.</p><p><strong>3. Divergence Between Governance Goals and Regulatory Enforcement</strong><br>While the declaration emphasizes cooperation and inclusive development, it is <strong>not a binding regulatory framework</strong>; it stops short of harmonized enforceable rules. This contrasts sharply with the <em>EU AI Act&#8217;s</em> tightening high-risk perimeters and enforcement mechanisms, highlighting a <strong>geopolitical divide</strong> in governance approaches:</p><ul><li><p><strong>EU model:</strong> rule-based compliance with stiff penalties</p></li><li><p><strong>India/global consensus model:</strong> voluntary principles with broad participation</p></li><li><p><strong>U.S. stance:</strong> resistance to centralized global governance in favor of flexible bilateral or multilateral partnerships</p></li></ul><p><strong>4. Economic and Competitive Stakes Are Front and Center</strong><br>The summit was as much about investment and industrial strategy as it was about principles. India and its partners signaled <strong>hundreds of billions of dollars in AI infrastructure plans and technology commitments</strong>, underscoring that governance now intertwines with economic diplomacy and competitiveness.</p><p><strong>5. 
Soft Power and Narrative Control</strong><br>India&#8217;s messaging, from Prime Minister <em>Narendra Modi&#8217;s</em> emphasis on human-centric AI to the summit&#8217;s guiding principle of <em>&#8220;Sarvajan Hitaya, Sarvajan Sukhaya&#8221;</em> (welfare and happiness for all), illustrates how nations now use AI governance forums to project values and influence global norms around equity, inclusivity, and technological sovereignty.</p><p><strong>Key Takeaways:</strong></p><ul><li><p><strong>AI governance is entering the realm of geopolitical contestation.</strong> It is no longer a siloed regulatory policy domain; it is embedded in strategic alliances, economic competition, and diplomatic influence.</p></li><li><p><strong>Global South voices are shaping the narrative.</strong> India&#8217;s summit marked the first time a major AI governance forum was hosted in and driven by a middle-power nation with global participation.</p></li><li><p><strong>Multipolar governance frameworks will likely coexist.</strong> EU enforcement regimes, U.S.
flexible partnerships, and broad consensus declarations like New Delhi&#8217;s sketch an ecosystem where <em>cooperation and competition</em> are both drivers.</p></li><li><p><strong>Economic clout and policy influence are now inseparable.</strong> Investment pledges and tech alliances at the summit point to governance being negotiated alongside infrastructure, capacity building, and market access.</p></li><li><p><strong>Declarations matter but enforceable rules are still contested.</strong> The New Delhi Declaration builds political momentum, but real regulatory and compliance regimes will be shaped through future summits, bilateral agreements, and regional strategies.</p></li></ul><h3><strong>Policy Radar: Regional and Global Governance Shifts</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CKWq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5301097b-039a-41b0-86fd-647048cf5dc6_942x894.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CKWq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5301097b-039a-41b0-86fd-647048cf5dc6_942x894.png 424w, https://substackcdn.com/image/fetch/$s_!CKWq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5301097b-039a-41b0-86fd-647048cf5dc6_942x894.png 848w, https://substackcdn.com/image/fetch/$s_!CKWq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5301097b-039a-41b0-86fd-647048cf5dc6_942x894.png 1272w, 
https://substackcdn.com/image/fetch/$s_!CKWq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5301097b-039a-41b0-86fd-647048cf5dc6_942x894.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CKWq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5301097b-039a-41b0-86fd-647048cf5dc6_942x894.png" width="942" height="894" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5301097b-039a-41b0-86fd-647048cf5dc6_942x894.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:894,&quot;width&quot;:942,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1761930,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/189034054?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F08ee32f3-216f-4bd8-8f63-8e2d939417b0_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CKWq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5301097b-039a-41b0-86fd-647048cf5dc6_942x894.png 424w, https://substackcdn.com/image/fetch/$s_!CKWq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5301097b-039a-41b0-86fd-647048cf5dc6_942x894.png 848w, 
https://substackcdn.com/image/fetch/$s_!CKWq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5301097b-039a-41b0-86fd-647048cf5dc6_942x894.png 1272w, https://substackcdn.com/image/fetch/$s_!CKWq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5301097b-039a-41b0-86fd-647048cf5dc6_942x894.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><strong><a href="https://time.com/7379272/spain-x-elon-musk-grok-ai-meta-tiktok-investigation-sexualized-deepfakes-children/">1.
Spain Opens Criminal Probe Into AI-Generated Child Abuse Content</a></strong></p><p><strong>Region: European Union (Spain)</strong></p><p>Spanish authorities have launched a criminal investigation into X (formerly Twitter), Meta, and TikTok over alleged circulation of AI-generated child sexual abuse material. Prosecutors are examining whether platforms failed to prevent algorithmic amplification and distribution of harmful synthetic content. This marks a significant escalation: existing criminal law is being directly applied to AI-generated outputs, not merely platform moderation practices.</p><div><hr></div><p><strong><a href="https://economictimes.indiatimes.com/tech/artificial-intelligence/delhi-declaration-urges-democratic-ai-for-social-good/articleshow/128675192.cms">2. Delhi Declaration Advances Multilateral AI Governance Principles</a></strong></p><p><strong>Region: India / Global</strong></p><p>More than 80 countries endorsed the <em>Delhi Declaration</em> at the India AI Impact Summit, calling for democratic, inclusive, and equitable AI development. The declaration emphasizes transparency, human oversight, and responsible innovation, reflecting growing multilateral coordination efforts. While non-binding, the declaration signals continued momentum toward globally aligned AI governance norms.</p><div><hr></div><p><strong><a href="https://www.straitstimes.com/world/united-states/us-totally-rejects-global-ai-governance-white-house-adviser">3. U.S. Signals Resistance to Centralized Global AI Oversight</a></strong></p><p><strong>Region: United States</strong></p><p>At the India AI Impact Summit, U.S. officials reiterated opposition to establishing a centralized global AI governance authority. The position underscores Washington&#8217;s preference for national regulatory flexibility and sector-specific oversight rather than treaty-based international governance. 
The divide reflects broader geopolitical tensions over who sets global AI rules.</p><div><hr></div><p><strong><a href="https://www.theverge.com/ai-artificial-intelligence/883456/anthropic-pentagon-department-of-defense-negotiations">4. Anthropic Negotiates AI Use Terms With U.S. Department of Defense</a></strong></p><p><strong>Region: United States</strong></p><p>Anthropic is reportedly engaged in high-level negotiations with the U.S. Department of Defense over military deployment conditions for its models. The company has resisted provisions that could permit mass surveillance or autonomous weapons use.</p><p>The episode highlights intensifying governance tensions between national security interests and corporate AI safety commitments.</p><div><hr></div><p><strong><a href="https://www.reuters.com/world/china/microsoft-says-it-is-pace-invest-50-billion-global-south-ai-push-2026-02-18/">5. Microsoft Announces $50B Global South AI Investment Push</a></strong></p><p><strong>Region: Global (Emerging Markets)</strong></p><p>Microsoft announced plans to invest approximately $50 billion through the decade to expand AI infrastructure and capabilities in emerging markets. The initiative includes compute infrastructure, skilling programs, and ecosystem development.</p><p>Large-scale corporate investment strategies are increasingly shaping the governance landscape by influencing standards, infrastructure access, and geopolitical alignment.</p><div><hr></div><p><strong><a href="https://www.ft.com/content/ecead6b9-eb42-4a85-bd33-073c659e84bf">6. Public &#8220;Botlash&#8221; Movement Gains Momentum in the U.S.</a></strong></p><p><strong>Region: United States</strong></p><p>A growing civic backlash against AI deployment, referred to as &#8220;botlash&#8221;, is gaining visibility in the U.S. 
Activists and community groups are raising concerns about privacy, labor displacement, and unchecked automation.</p><p>While not regulatory in itself, sustained civic pressure can shape legislative priorities and enforcement appetite in 2026.</p><div><hr></div><p><strong><a href="https://www.theguardian.com/global/2026/feb/24/tech-politics-ai-impact-summit-silicon-valley">7. Tech Industry Expands Political Influence in AI Policy Debates</a></strong></p><p><strong>Region: Global</strong></p><p>Technology companies are intensifying political engagement around AI governance, both domestically and internationally. Industry-backed advocacy efforts are increasing alongside global AI summits and regulatory negotiations.</p><p>This trend signals that corporate actors are not merely responding to regulation &#8212; they are actively shaping its trajectory.</p><div><hr></div><p><strong><a href="https://www.reuters.com/world/us/rise-ai-generated-child-sexual-imagery-reports-2026-02-20/">8. Surge in AI-Generated Harm Content Reports Raises Enforcement Pressure</a></strong></p><p><strong>Region: United States / Europe</strong></p><p>Authorities have reported a significant rise in AI-generated illegal imagery cases, increasing pressure on platforms and developers to implement stronger safeguards. Law enforcement agencies are signaling stricter scrutiny of generative model misuse. The enforcement focus is shifting from theoretical risk to active prosecution.</p><h3><strong>Framework Focus</strong></h3><h4><strong>The Three Competing Models of AI Governance</strong></h4><p>AI governance is no longer converging toward a single global standard. Instead, three distinct models are taking shape, each reflecting different political priorities and strategic objectives. The India AI Impact Summit did more than produce a declaration; it clarified that AI oversight is becoming multipolar.</p><p>The European Union represents the first model: regulatory discipline. 
Through the EU AI Act, governance is binding, risk-based, and enforcement-driven. Classification determines obligations, conformity assessments are mandatory, and penalties are significant. Trust is built through codified compliance and supervisory authority.</p><p>The United States reflects a second model: executive and procurement-led governance. Rather than a single comprehensive AI statute, oversight flows through agency mandates, federal guidance, contractual leverage, and national security framing. This approach prioritizes flexibility and strategic control while resisting centralized global oversight structures.</p><p>India&#8217;s approach, highlighted at the AI Impact Summit, represents a third model: strategic development governance. Here, AI governance is framed as economic infrastructure. The emphasis is on capability-building, inclusive growth, coalition formation, and international alignment rather than immediate enforcement. Principles precede penalties, and governance is embedded within national development strategy.</p><p>These models are not identical, nor are they converging. One centers on regulatory discipline, another on operational control, and the third on developmental strategy. For enterprises operating globally, this means AI governance is no longer just a compliance function. It is a geopolitical variable.</p><p>The future of AI oversight will likely be shaped not by a single dominant framework, but by the coexistence and competition of these three approaches.</p><h3><strong>Looking Ahead</strong></h3><ul><li><p><strong><a href="https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence">European Commission AI Act Implementation Workshop</a></strong></p><p><strong>Region: European Union | Expected: March 2026</strong><br>The European Commission is continuing stakeholder engagement sessions focused on high-risk system classification and conformity assessment capacity. 
These workshops are increasingly important as companies prepare for documentation and supervisory scrutiny under Article 6 classifications.</p></li><li><p><strong><a href="https://iapp.org/conference/iapp-global-summit">IAPP Global Summit 2026: Privacy | Washington, D.C., USA</a></strong></p><p><strong>Date:</strong> March 30&#8211;April 2, 2026<br>A flagship IAPP event on digital responsibility and governance. Sessions span operational AI policy implementation, vendor risk, compliance frameworks, international enforcement trends, and practical case studies on governance program scaling.</p></li><li><p><strong><a href="https://www.humanx.co/">HumanX 2026 | San Francisco, USA</a></strong></p><p><strong>Date:</strong> April 6&#8211;9, 2026<br>One of the largest independent AI conferences globally, featuring executives, technologists, policymakers, and investors. While not exclusively governance-focused, HumanX is essential for anyone tracking the intersection of AI regulation, business adoption, and strategy, with sessions on ethical deployment, trust frameworks, and real-world impact.</p></li></ul><h3><strong>Closing Thoughts</strong></h3><p>AI governance is entering a new phase. Declarations, enforcement actions, investment pledges, and defense negotiations are no longer isolated developments; they reflect a broader structural shift. Regulation is increasingly intertwined with economic strategy, industrial policy, and international alignment.</p><p>As governance models diverge across regions, organizations must think beyond compliance checklists. 
The ability to operate across different regulatory philosophies and anticipate where expectations are tightening will define leadership in this next phase.</p><p>As AI governance evolves across regions, one question should guide every boardroom discussion: <strong>are you preparing for isolated regulatory requirements, or for a world of competing governance frameworks?</strong></p><p>Until next week,<br><em><strong>The AI Governance Today</strong></em></p>]]></content:encoded></item><item><title><![CDATA[AI Governance Today • Issue #12 • Feb 17 2026 The Age of Classification Accountability
]]></title><description><![CDATA[Welcome to the 12th issue of AI Governance Today. If the last few months were about building AI governance frameworks, the past three weeks have been about drawing boundaries. From Brussels to Tokyo to Washington, regulators are moving beyond broad principles and into operational clarity, defining who falls within high-risk categories, how accountability will be demonstrated, and what evidence organizations must be prepared to produce.]]></description><link>https://aigovernancetoday.substack.com/p/ai-governance-today-issue-12-feb</link><guid isPermaLink="false">https://aigovernancetoday.substack.com/p/ai-governance-today-issue-12-feb</guid><dc:creator><![CDATA[Anmol Kumar]]></dc:creator><pubDate>Tue, 17 Feb 2026 16:14:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!1GQM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcca7c856-4deb-4787-93cf-e82d7232eb2f_1024x965.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xRCJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xRCJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 848w, 
https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:969310,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/185196042?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!xRCJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 424w, 
https://substackcdn.com/image/fetch/$s_!xRCJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Welcome to the <strong>12th issue of AI Governance Today</strong>. If the last few months were about building AI governance frameworks, the past three weeks have been about drawing boundaries. From Brussels to Tokyo to Washington, regulators are moving beyond broad principles and into operational clarity, defining who falls within high-risk categories, how accountability will be demonstrated, and what evidence organizations must be prepared to produce. </p><h3><strong>TL;DR For The Week</strong></h3><ul><li><p>The EU AI Act has entered its <strong>classification phase</strong>: Article 6 high-risk definitions are moving from abstract interpretation to practical enforcement triage.</p></li><li><p>High-risk designation now clearly triggers conformity assessments, documentation obligations, incident reporting, and fines up to &#8364;35M or 7% of global turnover.</p></li><li><p>Enterprises must accelerate <strong>AI inventory mapping and risk classification</strong> as interpretive flexibility narrows.</p></li><li><p>Conformity assessment capacity may become a 2026 bottleneck.</p></li><li><p>EU Member States are strengthening supervisory coordination ahead of active cross-border enforcement.</p></li><li><p>U.S.
federal agencies continue advancing operational AI accountability through structured impact and oversight mandates.</p></li><li><p>Japan&#8217;s 2026 AI Basic Plan positions governance as economic infrastructure &#8212; prioritizing capability-building and adaptive oversight.</p></li><li><p>The UK AI Safety Institute has begun structured model evaluations, signaling a more formalized safety testing regime.</p></li><li><p>Financial regulators globally are raising concerns about autonomous and agentic AI systems.</p></li><li><p>Existing laws (e.g., GDPR) are increasingly being used to enforce AI content accountability.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Rq0X!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcbb61e3-d670-4a9f-9e43-45facbb131e5_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Rq0X!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcbb61e3-d670-4a9f-9e43-45facbb131e5_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!Rq0X!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcbb61e3-d670-4a9f-9e43-45facbb131e5_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!Rq0X!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcbb61e3-d670-4a9f-9e43-45facbb131e5_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!Rq0X!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcbb61e3-d670-4a9f-9e43-45facbb131e5_2000x600.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Rq0X!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcbb61e3-d670-4a9f-9e43-45facbb131e5_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bcbb61e3-d670-4a9f-9e43-45facbb131e5_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:67334,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/188271867?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcbb61e3-d670-4a9f-9e43-45facbb131e5_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Rq0X!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcbb61e3-d670-4a9f-9e43-45facbb131e5_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!Rq0X!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcbb61e3-d670-4a9f-9e43-45facbb131e5_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!Rq0X!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcbb61e3-d670-4a9f-9e43-45facbb131e5_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!Rq0X!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbcbb61e3-d670-4a9f-9e43-45facbb131e5_2000x600.png 1456w" 
sizes="100vw"></picture></div></a></figure></div><h3><strong>Spotlight</strong></h3><h4><a href="https://secureprivacy.ai/blog/eu-ai-act-2026-compliance">The High-Risk Line Is Being Drawn: EU AI Act Moves From Text to Triage</a></h4><p>Between January 27 and mid-February, the European AI governance conversation shifted in a subtle but decisive way. The debate is no longer about what the EU AI Act <em>intends</em> to regulate.
It is now about <em>who falls inside the high-risk perimeter</em> and what that means in operational terms.</p><p><a href="https://artificialintelligenceact.eu/article/6/">Article 6 of the EU AI Act defines</a> when an AI system qualifies as &#8220;high-risk.&#8221; Until recently, many organizations relied on broad interpretation and cautious internal mapping. But regulators are now moving toward more concrete implementation guidance, narrowing ambiguity around sectoral examples and use-case thresholds.</p><p>This matters because classification determines compliance gravity.</p><p>High-risk designation triggers:</p><ul><li><p>Mandatory conformity assessments</p></li><li><p>Robust technical documentation requirements</p></li><li><p>Post-market monitoring obligations</p></li><li><p>Incident reporting to authorities</p></li><li><p>Human oversight safeguards</p></li><li><p>Board-level accountability exposure</p></li><li><p>Fines up to &#8364;35M or 7% of global turnover</p></li></ul><p>In other words, misclassification is no longer a theoretical compliance risk. It is a material governance failure.</p><p>Several use-case categories are emerging as focal points for scrutiny:</p><ul><li><p>AI used in recruitment and worker evaluation</p></li><li><p>Biometric identification and categorization systems</p></li><li><p>Creditworthiness and financial risk scoring tools</p></li><li><p>AI in public administration decision-making</p></li><li><p>Systems affecting access to essential services</p></li></ul><p>The practical shift underway is from principle to triage.</p><p>Enterprises are now confronting three urgent realities:</p><p><strong>1. AI Inventory Mapping Is No Longer Optional</strong></p><p>Organizations must maintain a defensible registry of all AI systems, broken down by function, deployment context, and risk exposure. 
Composite systems are being decomposed into sub-functions to determine whether specific components independently trigger high-risk classification.</p><p><strong>2. Documentation Debt Is Surfacing</strong></p><p>Many AI deployments preceded structured governance documentation. As classification lines sharpen, companies are retrofitting traceability &#8212; including training data summaries, model evaluation records, and decision log retention policies.</p><p><strong>3. Conformity Capacity May Become a Bottleneck</strong></p><p>Notified bodies and certification mechanisms are limited. As more systems are formally categorized high-risk, conformity assessment queues could become a significant operational choke point later in 2026.</p><p>What makes this moment significant is not that the law changed. It is that the interpretive margin is shrinking.</p><p>For multinational enterprises, the EU&#8217;s clarification phase has a cascading effect. Internal AI governance programs built for &#8220;future enforcement&#8221; are now being stress-tested against near-term supervisory expectations. Boards are asking different questions. Risk committees are requesting classification briefings. Procurement teams are inserting AI compliance representations into vendor contracts.</p><p>The EU has moved from drafting rules to drawing lines. And once lines are drawn, regulators expect evidence. The next phase of the AI Act will not be defined by philosophical debates about risk. 
It will be defined by whether organisations can demonstrate in a structured, auditable form that they understand where their systems fall on the spectrum.</p><p>The era of interpretive comfort is ending.</p><p>The era of classification accountability has begun.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1GQM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcca7c856-4deb-4787-93cf-e82d7232eb2f_1024x965.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1GQM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcca7c856-4deb-4787-93cf-e82d7232eb2f_1024x965.png 424w, https://substackcdn.com/image/fetch/$s_!1GQM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcca7c856-4deb-4787-93cf-e82d7232eb2f_1024x965.png 848w, https://substackcdn.com/image/fetch/$s_!1GQM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcca7c856-4deb-4787-93cf-e82d7232eb2f_1024x965.png 1272w, https://substackcdn.com/image/fetch/$s_!1GQM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcca7c856-4deb-4787-93cf-e82d7232eb2f_1024x965.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1GQM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcca7c856-4deb-4787-93cf-e82d7232eb2f_1024x965.png" width="1024" height="965" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cca7c856-4deb-4787-93cf-e82d7232eb2f_1024x965.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:965,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1676689,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/188271867?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F540c9345-72e9-429a-aa19-efde09520c00_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1GQM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcca7c856-4deb-4787-93cf-e82d7232eb2f_1024x965.png 424w, https://substackcdn.com/image/fetch/$s_!1GQM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcca7c856-4deb-4787-93cf-e82d7232eb2f_1024x965.png 848w, https://substackcdn.com/image/fetch/$s_!1GQM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcca7c856-4deb-4787-93cf-e82d7232eb2f_1024x965.png 1272w, https://substackcdn.com/image/fetch/$s_!1GQM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcca7c856-4deb-4787-93cf-e82d7232eb2f_1024x965.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3><strong>Policy Radar: Regional and Global Governance Shifts</strong></h3><p><strong>1. <a href="https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence">EU Member States Move Toward Supervisory Coordination</a></strong></p><p><strong>Region:</strong> European Union<br>Germany, France, and other EU Member States are accelerating work on national AI supervisory structures and coordination as the EU AI Act enters operational phases. National authorities are preparing oversight frameworks and refining roles for enforcement and audits under the bloc&#8217;s risk-based regime. </p><div><hr></div><p><strong>2. <a href="https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-21-Accelerating-Federal-Use-of-AI-through-Innovation-Governance-and-Public-Trust.pdf">U.S. 
Federal Agencies Advance Operational AI Accountability</a></strong></p><p><strong>Region:</strong> United States<br>Federal AI governance guidance continues to evolve within the executive branch. Memoranda from the Office of Management and Budget (OMB) emphasize structured impact assessments, human review mandates, transparency practices, and risk monitoring for agency AI use, shaping compliance expectations ahead of formal legislation. </p><div><hr></div><p><strong>3. <a href="https://www8.cao.go.jp/cstp/ai/ai_plan/aiplan_eng_20260116.pdf">Japan Strengthens Transparency and Public Sector AI Controls</a></strong></p><p><strong>Region:</strong> Japan<br>Japan&#8217;s ongoing AI governance strategy includes updates to procurement standards and transparency expectations for public-sector AI deployments. While largely principle-based, the focus on explainability, accountability, and public trust signals a more operational governance approach in the Asia-Pacific. </p><div><hr></div><p><strong>4. <a href="https://www.gov.uk/government/publications/ai-safety-institute-approach-to-evaluations/ai-safety-institute-approach-to-evaluations">UK AI Safety Institute Begins Structured Model Evaluations</a></strong></p><p><strong>Region:</strong> United Kingdom<br>The UK AI Safety Institute, now on a statutory trajectory, has started formal evaluation engagements with advanced model developers. These structured assessments include pre-release and scenario-based testing frameworks aimed at improving safety without punitive enforcement as a first resort. </p><div><hr></div><p><strong>5. <a href="https://www.investmentnews.com/regulation-legal-compliance/finra-flags-rise-of-agentic-ai-seeks-member-firms-feedback/264996">Financial Regulators Raise Concerns Over Autonomous Agents</a></strong></p><p><strong>Region:</strong> Global<br>Regulatory bodies, including U.S. 
and UK financial authorities, are flagging risks associated with autonomous and agentic AI systems in trading systems, advisory bots, and related functions. Firms are being urged to assess oversight, auditability, and human control as agents expand beyond traditional automation. </p><div><hr></div><p><strong>6. <a href="https://www.ft.com/content/3c720c9b-d2b6-41b3-a7b7-960b5f1fb94b">EU Privacy Watchdog Probes AI-Generated Content Risks</a></strong></p><p><strong>Region:</strong> European Union<br>The Irish Data Protection Commission launched an investigation into X (formerly Twitter) over AI-generated sexualized imagery, applying GDPR and content safety expectations to AI outputs. This highlights how existing privacy and safety laws are being used to enforce responsible AI content practices.</p><div><hr></div><p><strong>7. <a href="https://timesofindia.indiatimes.com/technology/tech-news/india-ai-impact-summit-2026-live-updates-new-delhi-pm-modi-bharat-mandapam-niti-aayog-february-16/liveblog/128405940.cms">India Hosts AI Impact Summit and Pushes Platform Accountability</a></strong></p><p><strong>Region:</strong> India / Global<br>India&#8217;s AI Impact Summit in New Delhi brought global leaders together to discuss governance centered on societal impact and inclusive innovation. Subsequently, the Indian government signaled tougher accountability demands on tech platforms regarding constitution-aligned content moderation and AI usage.</p><div><hr></div><p><strong>8.
<a href="https://www.axios.com/2026/02/12/anthropic-millions-ai-policy-fight">Anthropic Commits $20M to AI Policy Advocacy</a></strong></p><p><strong>Region:</strong> Private Sector / United States<br>AI developer Anthropic announced a $20 million initiative to support bipartisan policy advocacy, signalling increased industry engagement in shaping governance frameworks, transparency norms, and responsible deployment standards at the national level.</p><h3><strong>Framework Focus</strong></h3><h4><a href="https://www8.cao.go.jp/cstp/ai/ai_plan/aiplan_eng_20260116.pdf">Japan&#8217;s 2026 AI Basic Plan: Governance as Economic Infrastructure</a></h4><p>While much of the global AI governance debate centers on risk containment, Japan&#8217;s newly approved <strong>Artificial Intelligence Basic Plan (2026)</strong> frames AI governance as a pillar of national competitiveness.</p><p>Adopted by Cabinet decision at the end of December 2025, the Plan establishes Japan&#8217;s first consolidated national AI strategy under a statutory framework. 
Rather than introducing punitive controls, it outlines a coordinated, whole-of-government roadmap designed to accelerate AI deployment while embedding trust, accountability, and international alignment.</p><p>What distinguishes Japan&#8217;s approach is its integration of governance into economic strategy.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ln8w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F151b0fcb-dee6-4cb0-ab72-185a30b13b3f_1024x746.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ln8w!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F151b0fcb-dee6-4cb0-ab72-185a30b13b3f_1024x746.png 424w, https://substackcdn.com/image/fetch/$s_!ln8w!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F151b0fcb-dee6-4cb0-ab72-185a30b13b3f_1024x746.png 848w, https://substackcdn.com/image/fetch/$s_!ln8w!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F151b0fcb-dee6-4cb0-ab72-185a30b13b3f_1024x746.png 1272w, https://substackcdn.com/image/fetch/$s_!ln8w!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F151b0fcb-dee6-4cb0-ab72-185a30b13b3f_1024x746.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ln8w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F151b0fcb-dee6-4cb0-ab72-185a30b13b3f_1024x746.png" width="1024" height="746" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/151b0fcb-dee6-4cb0-ab72-185a30b13b3f_1024x746.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:746,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1310828,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/188271867?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F111efbff-8fe1-4fb3-b040-eca93fc12758_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ln8w!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F151b0fcb-dee6-4cb0-ab72-185a30b13b3f_1024x746.png 424w, https://substackcdn.com/image/fetch/$s_!ln8w!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F151b0fcb-dee6-4cb0-ab72-185a30b13b3f_1024x746.png 848w, https://substackcdn.com/image/fetch/$s_!ln8w!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F151b0fcb-dee6-4cb0-ab72-185a30b13b3f_1024x746.png 1272w, https://substackcdn.com/image/fetch/$s_!ln8w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F151b0fcb-dee6-4cb0-ab72-185a30b13b3f_1024x746.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The Plan rests on three structural pillars:</p><p><strong>1. Accelerated AI Utilization Across Society</strong></p><p>The government commits to leading by example, expanding AI adoption across public administration, healthcare, disaster management, infrastructure, and education. AI is positioned not merely as a technological upgrade but as a tool for addressing demographic pressures, productivity stagnation, and labour shortages.</p><p>Unlike purely regulatory frameworks, Japan&#8217;s model treats AI deployment itself as a public policy objective.</p><p><strong>2. Strengthening Domestic AI Capabilities</strong></p><p>The Plan emphasizes investment in foundational AI capabilities, compute infrastructure, data environments, research ecosystems, and talent pipelines. 
Startups and domestic innovation capacity receive particular attention, reflecting concern that Japan has lagged behind the U.S. and China in frontier AI development.</p><p>Governance, in this context, is meant to create predictability that encourages investment rather than deterring it.</p><p><strong>3. Leading AI Governance and International Norm-Setting</strong></p><p>Japan explicitly positions itself as a contributor to global AI governance architecture. The Plan reinforces transparency, explainability, and accountability principles while aligning with risk-based approaches emerging in Europe and elsewhere.</p><p>However, Japan&#8217;s tone remains principle-driven rather than enforcement-heavy. Instead of immediate fines or sanctions structures, the framework emphasizes guidance, public trust, and iterative refinement.</p><p><strong>The Operational Dimension: PDCA Governance</strong></p><p>One of the most important elements of the Plan is its commitment to a continuous <strong>PDCA (Plan&#8211;Do&#8211;Check&#8211;Act)</strong> cycle.</p><p>This signals that Japan views AI governance as dynamic. Policies will be evaluated, adjusted, and updated as technology evolves. Rather than a static compliance regime, the framework anticipates adaptive oversight.</p><p>For enterprises, this creates both flexibility and expectation. 
Organizations operating in Japan should anticipate increasing clarity around:</p><ul><li><p>Public sector AI procurement standards</p></li><li><p>Transparency expectations in government-linked deployments</p></li><li><p>Human oversight structures</p></li><li><p>Documentation practices for explainability and accountability</p></li></ul><p>While the Plan does not introduce immediate high-penalty enforcement mechanisms, it lays the foundation for progressively structured oversight.</p><p>Japan&#8217;s AI Basic Plan illustrates a third governance model emerging alongside the EU&#8217;s enforcement-heavy risk classification system and the U.S.&#8217;s procurement-driven oversight.</p><p>If the EU model is regulatory discipline, and the U.S. model is executive governance through agency control, Japan&#8217;s model is <strong>strategic governance as economic policy</strong>.</p><p>It treats trust not as a constraint on innovation, but as infrastructure for it.</p><p>As AI governance frameworks mature worldwide, Japan&#8217;s approach demonstrates that regulation does not always begin with restriction. It can begin with coordination, capability-building, and international alignment, with enforcement evolving gradually over time.</p><p>The long-term question will be whether principle-based governance can scale as AI systems become more autonomous and their impact grows.</p><p>For now, Japan has made clear that in its view, AI governance is not a brake on growth. 
It is the architecture that enables it.</p><h3><strong>Looking Ahead</strong></h3><ul><li><p><strong><a href="https://iapp.org/conference/iapp-uk-intensive">IAPP UK Intensive 2026: Privacy | London, UK</a></strong></p><p><strong>Date:</strong> February 23&#8211;26, 2026<br>Hosted by the International Association of Privacy Professionals, this intensive conference delves into emerging AI regulation, privacy obligations, risk management, and evolving compliance frameworks in Europe and the UK, making it ideal preparation as the EU AI Act&#8217;s high-risk rules mature. </p></li><li><p><strong><a href="https://iapp.org/conference/iapp-global-summit">IAPP Global Summit 2026: Privacy | Washington, D.C., USA</a></strong></p><p><strong>Date:</strong> March 30&#8211;April 2, 2026<br>A flagship IAPP event on digital responsibility and governance. Sessions span operational AI policy implementation, vendor risk, compliance frameworks, international enforcement trends, and practical case studies on governance program scaling. </p></li><li><p><strong><a href="https://www.humanx.co/">HumanX 2026 | San Francisco, USA</a></strong></p><p><strong>Date:</strong> April 6&#8211;9, 2026<br>One of the largest independent AI conferences globally, featuring executives, technologists, policymakers, and investors. While not exclusively governance-focused, HumanX is essential for anyone tracking the intersection of AI regulation, business adoption, and strategy, with sessions on ethical deployment, trust frameworks, and real-world impact. </p></li></ul><h3>Closing Thoughts</h3><p>Across jurisdictions, one pattern is becoming unmistakable: governance is no longer about signaling responsibility; it is about proving it. Classification lines are sharpening, supervisory coordination is strengthening, and documentation expectations are becoming operational realities. 
The organizations that will lead in this new phase are not those waiting for regulatory perfection, but those building AI systems designed to withstand scrutiny from day one. As enforcement moves from principle to practice, one question should sit at the centre of every AI strategy discussion: if a regulator asked you tomorrow to justify your system&#8217;s risk classification, could you prove it?</p><p>Until next week,<br><em><strong>The AI Governance Today</strong></em></p>]]></content:encoded></item><item><title><![CDATA[AI Governance Today • Issue #11 • Jan 27 2026 South Korea’s AI Basic Act Sets a New Standard for Enforceable AI Oversight]]></title><description><![CDATA[Welcome to the 11th issue of AI Governance Today. If last week was defined by the first wave of deepfake enforcement, this week marks a decisive shift from "what is the AI saying?" to "what is the AI doing?" As we cross into late January, the era of theoretical risk has ended.]]></description><link>https://aigovernancetoday.substack.com/p/ai-governance-today-issue-11-jan</link><guid isPermaLink="false">https://aigovernancetoday.substack.com/p/ai-governance-today-issue-11-jan</guid><dc:creator><![CDATA[Anmol Kumar]]></dc:creator><pubDate>Tue, 27 Jan 2026 15:02:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!NtAV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83b9dc37-83c4-47e4-8583-3eaa86347e5b_1024x895.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xRCJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!xRCJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:969310,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/185196042?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!xRCJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" 
fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Welcome to the <strong>11th issue of AI Governance Today</strong>. If last week was defined by the first wave of deepfake enforcement, this week marks a decisive shift from "what is the AI saying?" to "what is the AI <em>doing</em>?" As we cross into late January, the era of theoretical risk has ended. With South Korea enforcing the world&#8217;s first comprehensive AI framework and Finland activating the first national sanctions board under the EU AI Act, regulators are no longer waiting for accidents to happen before stepping in. We are moving into a period of operational duty of care, where governance is no longer a pre-launch checklist but a real-time, auditable requirement for autonomous agents and judicial systems alike.</p><h3><strong>TL;DR For The Week</strong></h3><ul><li><p>South Korea enforces its AI Basic Act, establishing a national framework for AI governance.</p></li><li><p>The law requires high-impact AI oversight and human-in-the-loop safeguards.</p></li><li><p>Mandatory labeling of AI-generated content is enforced to prevent misrepresentation.</p></li><li><p>Transparency obligations require disclosure of AI involvement in decision-making.</p></li><li><p>Compliance includes risk management frameworks, traceable decision logs, and incident reporting.</p></li><li><p>Guidance platforms, consultation support, and government-linked incentives help organizations comply.</p></li><li><p>U.S. 
judges form a Judicial AI Consortium to address courtroom AI risks.</p></li><li><p>Finland activates EU AI Act enforcement for high-risk AI systems.</p></li><li><p>California&#8217;s Companion AI guardrails (SB 243) go live.</p></li><li><p>The UK formalizes the AI Safety Institute with statutory powers.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7tgO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f5572d1-c49f-4d83-9026-2482f0a5c310_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7tgO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f5572d1-c49f-4d83-9026-2482f0a5c310_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!7tgO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f5572d1-c49f-4d83-9026-2482f0a5c310_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!7tgO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f5572d1-c49f-4d83-9026-2482f0a5c310_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!7tgO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f5572d1-c49f-4d83-9026-2482f0a5c310_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7tgO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f5572d1-c49f-4d83-9026-2482f0a5c310_2000x600.png" width="1456" height="437" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2f5572d1-c49f-4d83-9026-2482f0a5c310_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:104528,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/185949766?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f5572d1-c49f-4d83-9026-2482f0a5c310_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7tgO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f5572d1-c49f-4d83-9026-2482f0a5c310_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!7tgO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f5572d1-c49f-4d83-9026-2482f0a5c310_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!7tgO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f5572d1-c49f-4d83-9026-2482f0a5c310_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!7tgO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f5572d1-c49f-4d83-9026-2482f0a5c310_2000x600.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3><strong>Spotlight</strong></h3><h4><a href="https://aibasicact.kr/">South Korea&#8217;s AI Basic Act Marks a New Phase in Global AI Governance</a></h4><p>South Korea&#8217;s <strong>AI Basic Act</strong> came into force in January 2026, positioning the country as one of the first to implement a comprehensive national framework for artificial intelligence oversight. The legislation brings together safety, transparency, and accountability obligations under a single legal foundation and has drawn global attention precisely because it is being enforced sooner than similar frameworks elsewhere.</p><p>At its core, the Basic Act aims to balance innovation with trust. 
The law mandates human oversight for high-impact AI applications, enforces clear labelling requirements for AI-generated content, and introduces transparency obligations for system operators. Penalties for non-compliance can reach 30 million won, although authorities have provided a grace period of at least one year before fines are imposed to allow industry time to adapt.</p><p>This regulatory leap has drawn a range of responses from the domestic tech ecosystem. Larger firms with established compliance teams are better positioned to interpret and implement the new requirements, while startups and smaller players have raised concerns about vague definitions, especially around &#8220;high-impact AI&#8221;, and about the potential for regulatory burden to slow innovation.</p><p>The government has responded with support measures, including guidance platforms and assistance centres to help organisations understand and comply with the new regime. These efforts reflect an intent to soften the transition from policy to enforcement while reinforcing the law&#8217;s foundational goals of safe and trustworthy AI.</p><p>Viewed from outside Korea, the Basic Act provides a practical test case of how comprehensive AI governance can be enacted in a national context. Its early implementation highlights a broader trend in which governments are experimenting with mechanisms to regulate AI rapidly, yet thoughtfully, in response to emerging social and economic risks. 
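</p><p><em>Obligations like human oversight, labelling, and transparency tend to surface in engineering as auditable records. The sketch below illustrates what a traceable, human-in-the-loop decision-log entry might look like; every field name here is hypothetical, not an official schema from the Act.</em></p>

```python
# Illustrative only: a minimal, traceable decision-log entry of the kind
# that human-oversight and transparency obligations tend to require.
# All field names are hypothetical, not an official compliance schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional


@dataclass
class DecisionLogEntry:
    system_id: str              # which AI system produced the output
    model_version: str          # exact version, for reproducibility
    input_summary: str          # redacted/summarized input, not raw PII
    output_summary: str         # what the system decided or generated
    ai_generated_label: bool    # was the output labeled as AI-generated?
    human_reviewer: Optional[str]  # who signed off, if oversight applied
    timestamp: str              # UTC time of the decision


def log_decision(entry: DecisionLogEntry) -> str:
    """Serialize an entry to one append-only JSON line for audit trails."""
    return json.dumps(asdict(entry), sort_keys=True)


entry = DecisionLogEntry(
    system_id="loan-screening-v2",
    model_version="2.3.1",
    input_summary="applicant features (redacted)",
    output_summary="application flagged for manual review",
    ai_generated_label=True,
    human_reviewer="analyst-017",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(log_decision(entry))
```

<p>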
For the global AI governance community, South Korea&#8217;s experience offers an instructive example of the real-world challenges and opportunities inherent in moving from aspiration to enforcement.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NtAV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83b9dc37-83c4-47e4-8583-3eaa86347e5b_1024x895.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NtAV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83b9dc37-83c4-47e4-8583-3eaa86347e5b_1024x895.png 424w, https://substackcdn.com/image/fetch/$s_!NtAV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83b9dc37-83c4-47e4-8583-3eaa86347e5b_1024x895.png 848w, https://substackcdn.com/image/fetch/$s_!NtAV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83b9dc37-83c4-47e4-8583-3eaa86347e5b_1024x895.png 1272w, https://substackcdn.com/image/fetch/$s_!NtAV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83b9dc37-83c4-47e4-8583-3eaa86347e5b_1024x895.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NtAV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83b9dc37-83c4-47e4-8583-3eaa86347e5b_1024x895.png" width="1024" height="895" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/83b9dc37-83c4-47e4-8583-3eaa86347e5b_1024x895.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:895,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1544728,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/185949766?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce8f2c0b-cb79-4acc-9c6a-c9be6922ae8d_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NtAV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83b9dc37-83c4-47e4-8583-3eaa86347e5b_1024x895.png 424w, https://substackcdn.com/image/fetch/$s_!NtAV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83b9dc37-83c4-47e4-8583-3eaa86347e5b_1024x895.png 848w, https://substackcdn.com/image/fetch/$s_!NtAV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83b9dc37-83c4-47e4-8583-3eaa86347e5b_1024x895.png 1272w, https://substackcdn.com/image/fetch/$s_!NtAV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83b9dc37-83c4-47e4-8583-3eaa86347e5b_1024x895.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3><strong>Policy Radar: Regional and Global Governance Shifts</strong></h3><p><strong><a href="https://fpf.org/blog/south-koreas-new-ai-framework-act-a-balancing-act-between-innovation-and-regulation/">1. South Korea Enacts World&#8217;s First Comprehensive AI Basic Act</a></strong></p><p><strong>Region:</strong> South Korea</p><p>On <strong>January 22, 2026</strong>, South Korea made history as the first nation to enforce a comprehensive, standalone AI law. The <strong>AI Basic Act</strong> creates a unified legal framework for the entire AI lifecycle. 
Key mandates include:</p><ul><li><p><strong>Watermarking:</strong> Compulsory labeling for all generative content to combat deepfakes.</p></li><li><p><strong>High-Impact Audits:</strong> Strict safety and reliability documentation for AI used in &#8220;essential&#8221; sectors like healthcare, finance, and energy.</p></li><li><p><strong>National Oversight:</strong> A new National AI Committee with the power to issue corrective orders and substantial fines.</p></li></ul><div><hr></div><p><strong><a href="https://www.reuters.com/legal/transactional/us-judges-form-group-tackle-pitfalls-promise-ai-2026-01-26/">2. U.S. Judges Form Consortium to Tackle Courtroom AI Risks</a></strong></p><p><strong>Region:</strong> United States</p><p>A new <strong>Judicial AI Consortium</strong>, composed of state and federal judges, has formed to establish &#8220;Rules of the Road&#8221; for AI in legal settings. Disturbed by the rise of <strong>AI hallucinations</strong> (fake case citations and fabricated evidence) in court filings, the group is developing best practices for judge-led verification. The move signals that the judiciary will no longer wait for federal legislation to protect the integrity of the record.</p><div><hr></div><p><strong><a href="https://tem.fi/en/ai-regulation">3. Finland Activates EU AI Act Enforcement Mechanisms</a></strong></p><p><strong>Region:</strong> European Union</p><p>While much of the <strong>EU AI Act</strong> remains in a staggered rollout, <strong>Finland</strong> became the first member state to activate its national Sanctions Board this week. This board is now empowered to audit &#8220;High-Risk&#8221; AI inventories. 
Finnish regulators have signaled they will prioritize the investigation of <strong>algorithmic bias</strong> in recruitment tools, setting a precedent for the massive fines (up to <strong>&#8364;35M</strong> or <strong>7% of revenue</strong>) allowed under the Act.</p><div><hr></div><p><strong><a href="https://sd18.senate.ca.gov/news/first-nation-ai-chatbot-safeguards-signed-law">4. California&#8217;s &#8220;Companion AI&#8221; Guardrails Go Live (SB 243)</a></strong></p><p><strong>Region:</strong> United States (State-Level)</p><p>Effective this week, California&#8217;s <strong>Companion Chatbots Act</strong> now governs AI systems designed for &#8220;relational&#8221; engagement. Developers must now provide <strong>continuous disclosure</strong> (reminding users the bot is not human during long sessions) and implement <strong>mandatory intervention triggers</strong> for users expressing self-harm intent. For minors, &#8220;immersion interrupts&#8221; are now required to prevent emotional dependence on AI entities.</p><div><hr></div><p><strong><a href="https://www.osborneclarke.com/insights/regulatory-outlook-january-2026-artificial-intelligence">5. UK Moves to Statutory Footing for AI Safety Institute</a></strong></p><p><strong>Region:</strong> United Kingdom</p><p>Following scrutiny of platforms like X (formerly Twitter) for deepfake generation, the UK government is moving to place the <strong>AI Safety Institute</strong> on a permanent statutory footing. 
This would grant the Institute formal powers to request pre-release access to the &#8220;most powerful&#8221; models for safety testing, shifting the UK from a purely &#8220;voluntary&#8221; safety regime to a more structured, enforceable model.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_DDF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f6b69e3-c229-40a0-8665-6f958f4fa1a3_1024x934.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_DDF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f6b69e3-c229-40a0-8665-6f958f4fa1a3_1024x934.png 424w, https://substackcdn.com/image/fetch/$s_!_DDF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f6b69e3-c229-40a0-8665-6f958f4fa1a3_1024x934.png 848w, https://substackcdn.com/image/fetch/$s_!_DDF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f6b69e3-c229-40a0-8665-6f958f4fa1a3_1024x934.png 1272w, https://substackcdn.com/image/fetch/$s_!_DDF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f6b69e3-c229-40a0-8665-6f958f4fa1a3_1024x934.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_DDF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f6b69e3-c229-40a0-8665-6f958f4fa1a3_1024x934.png" width="1024" height="934" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7f6b69e3-c229-40a0-8665-6f958f4fa1a3_1024x934.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:934,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1906001,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/185949766?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb6896b51-e009-4f98-909b-100ea6872b6e_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_DDF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f6b69e3-c229-40a0-8665-6f958f4fa1a3_1024x934.png 424w, https://substackcdn.com/image/fetch/$s_!_DDF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f6b69e3-c229-40a0-8665-6f958f4fa1a3_1024x934.png 848w, https://substackcdn.com/image/fetch/$s_!_DDF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f6b69e3-c229-40a0-8665-6f958f4fa1a3_1024x934.png 1272w, https://substackcdn.com/image/fetch/$s_!_DDF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7f6b69e3-c229-40a0-8665-6f958f4fa1a3_1024x934.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3><strong>Framework Focus</strong></h3><h4><strong><a href="https://aibasicact.kr/">South Korea AI Basic Act: Structural and Operational Deep Dive</a></strong></h4><p>South Korea&#8217;s AI Basic Act, formally the Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness, provides a comprehensive governance framework for AI systems in the country. The law applies to all AI deployed or operated domestically, with a particular focus on high-impact AI applications such as healthcare, transportation, finance, public decision-making, and critical infrastructure. 
These systems are considered high-impact because they can affect human safety, fundamental rights, or essential services.</p><p>The Act establishes a multi-tiered institutional structure:</p><ul><li><p>The National AI Committee provides strategic guidance, coordinates national policy, and sets technical standards.</p></li><li><p>Sector-specific authorities oversee enforcement within their respective domains, ensuring compliance aligns with sectoral risk and context.</p></li></ul><p>This model balances central coordination with domain expertise, allowing authorities to evaluate AI risks in context while maintaining consistent national oversight.</p><p>Organizations deploying high-impact AI systems must implement comprehensive compliance measures including:</p><ul><li><p>Risk management frameworks covering system design, deployment, and post-deployment monitoring</p></li><li><p>Human oversight mechanisms to prevent or mitigate harm</p></li><li><p>Technical documentation and traceable decision logs for regulatory audits</p></li><li><p>Incident reporting procedures to notify authorities of system failures or adverse outcomes</p></li></ul><p>These obligations embed accountability into both the design and operational phases of AI, making compliance measurable and auditable.</p><p>Transparency obligations are also central to the framework:</p><ul><li><p>Providers must label AI-generated content to prevent misrepresentation.</p></li><li><p>Organizations must disclose AI involvement in decision-making processes that affect users.</p></li><li><p>Traceable records of system behavior must be maintained to allow regulatory and user verification.</p></li></ul><p>This focus on clarity and documentation fosters trust and ensures AI deployments can be evaluated in real-world conditions. 
</p><p>The Act pairs enforceable obligations with support measures:</p><ul><li><p>Administrative fines can reach 30 million won, but a grace period allows organizations time to comply.</p></li><li><p>Authorities provide guidance platforms, compliance toolkits, and consultation support.</p></li><li><p>Compliance is linked to eligibility for government-funded AI research and infrastructure programs, incentivizing adherence while promoting innovation.</p></li></ul><p>Finally, the law aligns with international AI governance standards, reflecting risk-based classification, human oversight, and transparency principles similar to the EU AI Act. South Korea&#8217;s approach demonstrates how a national framework can translate high-level principles into operational and auditable governance, providing a model for other countries implementing AI regulation.</p><h3><strong>Looking Ahead</strong></h3><ul><li><p><strong><a href="https://www.mlex.com/mlex/articles/2432419/eu-s-guidelines-for-risky-ai-systems-to-see-finalization-delay">The &#8220;High-Risk&#8221; Deadline (Feb 2, 2026)</a></strong></p><p>The European Commission was expected to release its definitive guidelines on the practical implementation of <strong>Article 6 of the EU AI Act</strong>, though finalization may slip. 
This is a critical milestone for any organization deploying AI in Europe, as it will finally provide the &#8220;concrete examples&#8221; of what qualifies as high-risk versus low-risk, effectively setting the compliance roadmap for the rest of the year.</p></li><li><p><strong><a href="https://iapp.org/conference/iapp-uk-intensive/agenda">IAPP UK Intensive: AI Governance - London (Feb 23&#8211;26, 2026)</a></strong></p><p>Privacy and AI governance leaders will gather in London to deconstruct the first month of &#8220;active enforcement&#8221; under the UK&#8217;s Online Safety Act and the EU AI Act. This event will likely produce the first &#8220;lessons learned&#8221; from the Finnish and British regulatory inquiries we saw this week.</p></li><li><p><strong><a href="https://www.processexcellencenetwork.com/events-business-transformation-world-summit/agenda-mc">Agentic AI Transformation Summit - Miami (Feb 2&#8211;4, 2026)</a></strong></p><p>As governance moves toward <strong>AI Agents</strong>, this summit will be ground zero for discussing &#8220;Autonomous Accountability.&#8221; It will focus on how enterprises can move agents into production while maintaining the &#8220;human-in-the-loop&#8221; safeguards demanded by the new South Korean and California laws.</p></li></ul><h3><strong>Closing Thoughts</strong></h3><p>South Korea&#8217;s AI Basic Act illustrates how quickly ambitious regulatory frameworks can move from paper to practice, creating both challenges and opportunities for organizations deploying AI. As governments worldwide follow suit, operational readiness and demonstrable safeguards are becoming the true measures of responsible AI. 
The question facing every AI team today is: how prepared are you to show regulators that your systems are safe, auditable, and accountable in real-world operations?</p><p>Until next week,<br><em><strong>The AI Governance Today</strong></em></p>]]></content:encoded></item><item><title><![CDATA[AI Governance Today • Issue #10 • Jan 20 2026 AI Accountability Goes Live: Deepfakes Lead the Way]]></title><description><![CDATA[Welcome to the 10th issue of AI Governance Today. This week, one reality has become unmistakable: deepfakes have moved from a theoretical risk to a live test of AI governance programs. Regulators across the UK, U.S., and Europe are using synthetic media as a lens to evaluate whether organizations can prevent harm, detect misuse, and respond effectively once AI systems are deployed. What was once treated as a content moderation or misinformation concern is now an enforcement benchmark, signaling that]]></description><link>https://aigovernancetoday.substack.com/p/ai-governance-today-issue-10-jan</link><guid isPermaLink="false">https://aigovernancetoday.substack.com/p/ai-governance-today-issue-10-jan</guid><dc:creator><![CDATA[Anmol Kumar]]></dc:creator><pubDate>Tue, 20 Jan 2026 17:00:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!xRCJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xRCJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!xRCJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:969310,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/185196042?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xRCJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!xRCJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa5fdb98-62df-4467-9df7-90f294c78a20_2000x600.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Welcome to the <strong>10th issue of </strong><em><strong>AI Governance Today</strong></em>. This week, one reality has become unmistakable: deepfakes have moved from a theoretical risk to a <strong>live test of AI governance programs</strong>. Regulators across the UK, U.S., and Europe are using synthetic media as a lens to evaluate whether organizations can prevent harm, detect misuse, and respond effectively once AI systems are deployed. What was once treated as a content moderation or misinformation concern is now an enforcement benchmark, signaling that <strong>operational accountability, transparency, and rapid remediation</strong> are no longer optional but central to AI governance.</p><h3><strong>TL;DR For The Week</strong></h3><ul><li><p>Deepfakes are now a primary enforcement test for AI governance, shifting regulatory focus from design intentions to real-world harm, accountability, and operational proof.</p></li><li><p>UK regulators, under the Online Safety Act, and California and U.S. federal authorities are actively investigating AI platforms for non-consensual and harmful deepfake content.</p></li><li><p>Civil remedies and criminal penalties for creators, deployers, and platforms are expanding, including the DEFIANCE Act in the U.S. 
Senate.</p></li><li><p>Transparency, machine-readable labeling, rapid takedown procedures, and post-deployment monitoring are emerging as universal compliance expectations.</p></li><li><p>Organizations must demonstrate <strong>detectable safeguards, clear governance, and rapid response capabilities</strong>; deepfakes are the first visible benchmark for broader AI accountability.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Uuub!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F140de3cd-fdd4-45e0-8a9e-cc365c8b4a3d_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Uuub!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F140de3cd-fdd4-45e0-8a9e-cc365c8b4a3d_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!Uuub!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F140de3cd-fdd4-45e0-8a9e-cc365c8b4a3d_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!Uuub!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F140de3cd-fdd4-45e0-8a9e-cc365c8b4a3d_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!Uuub!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F140de3cd-fdd4-45e0-8a9e-cc365c8b4a3d_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Uuub!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F140de3cd-fdd4-45e0-8a9e-cc365c8b4a3d_2000x600.png" width="1456" 
height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/140de3cd-fdd4-45e0-8a9e-cc365c8b4a3d_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:514678,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/185196042?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F140de3cd-fdd4-45e0-8a9e-cc365c8b4a3d_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Uuub!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F140de3cd-fdd4-45e0-8a9e-cc365c8b4a3d_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!Uuub!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F140de3cd-fdd4-45e0-8a9e-cc365c8b4a3d_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!Uuub!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F140de3cd-fdd4-45e0-8a9e-cc365c8b4a3d_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!Uuub!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F140de3cd-fdd4-45e0-8a9e-cc365c8b4a3d_2000x600.png 1456w" sizes="100vw"></picture></div></a></figure></div><h3><strong>Spotlight</strong></h3><h4>Deepfakes Emerge as the First Enforcement Test of AI Governance</h4><p>Over the past week, deepfakes have moved decisively from a theoretical AI risk to a practical enforcement priority. What was once treated as a content moderation or misinformation issue is now being used by regulators as a test case for whether AI governance programs work in production, not just on paper.</p><p>Across jurisdictions, regulators are converging on a shared view: deepfakes represent a direct threat to personal safety, identity, democratic processes, and platform trust. 
As a result, regulatory attention has shifted away from abstract model characteristics and toward harm-based accountability, focusing on non-consensual imagery, identity misuse, sexual exploitation, and deception at scale.</p><p>A defining feature of this shift is the use of existing legal authority to address live AI systems. Regulators are no longer waiting for comprehensive AI laws to come into force before acting. Instead, they are applying consumer protection, online safety, privacy, and criminal statutes to synthetic media harms, particularly where safeguards were foreseeable but absent or ineffective. This approach lowers the enforcement threshold and accelerates accountability timelines for companies deploying generative systems.</p><p>Transparency has emerged as a core regulatory expectation. Many jurisdictions now require clear disclosure when content is artificially generated or manipulated, often extending beyond visible labels to include machine-readable markers that enable detection downstream. These obligations apply not only to malicious deepfakes, but also to synthetic media used in entertainment, marketing, or artistic contexts, reinforcing the principle that realism triggers responsibility regardless of intent.</p><p>Platform responsibility is another unifying theme. Rather than focusing solely on creators, regulators are placing increasing obligations on services that host, distribute, or amplify deepfake content. Expectations include rapid takedown mechanisms, proactive risk assessments, monitoring for abuse patterns, and documented escalation procedures. Failure to act after notice is increasingly treated as a governance failure rather than a technical oversight.</p><p>In parallel, civil liability and criminal penalties are expanding. 
Victims of non-consensual deepfakes are gaining clearer legal pathways to seek damages, while creators and distributors face escalating penalties when synthetic media is used for harassment, exploitation, fraud, or political manipulation. These measures reflect a broader policy judgment that deepfakes undermine fundamental rights tied to identity, consent, and dignity.</p><p>What makes deepfakes particularly significant is their role as a <strong>regulatory forcing function</strong>. They expose gaps in post-deployment monitoring, challenge assumptions about user behavior, and test whether governance frameworks can respond dynamically once systems are in the wild. As a result, enforcement actions tied to deepfakes are implicitly evaluating incident response readiness, internal accountability structures, and the ability to demonstrate control under scrutiny.</p><p>The broader implication is clear: deepfakes are not an isolated content issue, but the first widely visible enforcement benchmark for AI governance. Regulators are signaling that future scrutiny, whether related to discrimination, manipulation, or unsafe automation, will follow a similar pattern, prioritizing real-world harm and operational proof over aspirational principles.</p><p>For organizations deploying generative AI, the lesson is straightforward. Effective governance now requires detectable safeguards, enforceable policies, and the ability to intervene quickly and document decisions. 
Deepfake enforcement is not the end of the regulatory story; it is the opening chapter in how AI accountability will be judged going forward.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tKVo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F568880b1-6446-42b6-9d19-8bb2d3a73a7e_1674x690.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tKVo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F568880b1-6446-42b6-9d19-8bb2d3a73a7e_1674x690.png 424w, https://substackcdn.com/image/fetch/$s_!tKVo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F568880b1-6446-42b6-9d19-8bb2d3a73a7e_1674x690.png 848w, https://substackcdn.com/image/fetch/$s_!tKVo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F568880b1-6446-42b6-9d19-8bb2d3a73a7e_1674x690.png 1272w, https://substackcdn.com/image/fetch/$s_!tKVo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F568880b1-6446-42b6-9d19-8bb2d3a73a7e_1674x690.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tKVo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F568880b1-6446-42b6-9d19-8bb2d3a73a7e_1674x690.png" width="1456" height="600" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/568880b1-6446-42b6-9d19-8bb2d3a73a7e_1674x690.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:600,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:259363,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/185196042?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F568880b1-6446-42b6-9d19-8bb2d3a73a7e_1674x690.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tKVo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F568880b1-6446-42b6-9d19-8bb2d3a73a7e_1674x690.png 424w, https://substackcdn.com/image/fetch/$s_!tKVo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F568880b1-6446-42b6-9d19-8bb2d3a73a7e_1674x690.png 848w, https://substackcdn.com/image/fetch/$s_!tKVo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F568880b1-6446-42b6-9d19-8bb2d3a73a7e_1674x690.png 1272w, https://substackcdn.com/image/fetch/$s_!tKVo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F568880b1-6446-42b6-9d19-8bb2d3a73a7e_1674x690.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: <a href="https://www.realitydefender.com/insights/the-state-of-deepfake-regulations">Reality Defender</a></figcaption></figure></div><p></p><h3><strong>Policy Radar: Regional and Global Governance Shifts</strong></h3><p><strong><a href="https://www.reuters.com/technology/uk-pm-starmer-says-musks-x-moves-comply-with-uk-law-2026-01-14/">1. UK Deepfake Enforcement and Regulatory Action Accelerates</a></strong></p><p><strong>Region:</strong> United Kingdom<br>UK Prime Minister Keir Starmer confirmed that regulator Ofcom is actively probing AI platform <em>X</em> (formerly Twitter) for allegedly generating sexually explicit deepfake content via its Grok chatbot. This follows new legislation criminalizing non-consensual deepfake image creation, slated to take effect imminently under the Online Safety Act framework. 
The government has emphasized enforcement readiness and signaled that additional measures will follow if platforms fail to comply with UK legal standards on abusive AI content.</p><div><hr></div><p><strong><a href="https://www.theguardian.com/technology/2026/jan/14/california-attorney-general-investigates-grok-ai-elon-musk?">2. California Attorney General Opens AI Misuse Investigation</a></strong></p><p><strong>Region:</strong> United States (State-Level)<br>California&#8217;s Attorney General has launched an investigation into xAI&#8217;s Grok tool over allegations that it enabled the generation of non-consensual sexually explicit deepfake images, including content involving minors. The inquiry reflects heightened regulatory scrutiny of harmful AI outputs and abuse vectors, with bipartisan political figures condemning the tool&#8217;s capabilities and calling for broader enforcement action. International regulators in Europe and Asia are also initiating inquiries into related conduct.</p><div><hr></div><p><strong><a href="https://www.theverge.com/news/861531/defiance-act-senate-passage-deepfakes-grok">3. U.S. Senate Passes Deepfake Civil Remedies Bill (DEFIANCE Act)</a></strong></p><p><strong>Region:</strong> United States<br>The U.S. Senate unanimously passed the DEFIANCE Act, designed to allow victims of non-consensual AI-generated intimate imagery to pursue civil damages against creators and deployers of such content. The bill responds to high-profile deepfake concerns and aims to expand legal recourse beyond platform removal requirements to include individual accountability. The measure now awaits action in the House of Representatives.</p><div><hr></div><p><strong>4. California Advances AI Transparency and Safety Enforcement in 2026</strong></p><p><strong>Region:</strong> United States (State-Level)<br>California continues to implement its AI regulatory agenda in 2026, including enforcement of the Transparency in Frontier Artificial Intelligence Act (SB 53). 
This law requires large AI developers to publish safety reports, adhere to transparency obligations, and offer whistleblower protections. As the state positions itself at the forefront of operational AI governance, legislators are also debating additional oversight measures connected to data center energy use and broader AI impacts on infrastructure and employment.</p><div><hr></div><p><strong>5. EU Digital Enforcement Posture Strengthens Around AI Outputs</strong></p><p><strong>Region:</strong> European Union<br>Although no single <em>news story</em> anchored it last week, EU regulatory pressure featured prominently in enforcement discourse: the European Commission has been reviewing AI provider compliance with existing frameworks such as the Digital Services Act. Commission actions include fines and heightened investigatory scrutiny of platforms that fail to meet transparency and safety obligations, demonstrating that EU enforcement mechanisms are being applied to generative AI models alongside emerging national laws.
(Context from regulatory tracking.)</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NgIm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe38ae8-0b49-482c-8a65-50ab6eecc0aa_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NgIm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe38ae8-0b49-482c-8a65-50ab6eecc0aa_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!NgIm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe38ae8-0b49-482c-8a65-50ab6eecc0aa_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!NgIm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe38ae8-0b49-482c-8a65-50ab6eecc0aa_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!NgIm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe38ae8-0b49-482c-8a65-50ab6eecc0aa_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NgIm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe38ae8-0b49-482c-8a65-50ab6eecc0aa_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/efe38ae8-0b49-482c-8a65-50ab6eecc0aa_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2633095,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/185196042?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe38ae8-0b49-482c-8a65-50ab6eecc0aa_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NgIm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe38ae8-0b49-482c-8a65-50ab6eecc0aa_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!NgIm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe38ae8-0b49-482c-8a65-50ab6eecc0aa_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!NgIm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe38ae8-0b49-482c-8a65-50ab6eecc0aa_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!NgIm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fefe38ae8-0b49-482c-8a65-50ab6eecc0aa_1024x1024.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3><strong>Framework Focus</strong></h3><h4><strong><a href="https://www.legislation.gov.uk/ukpga/2023/50">UK Online Safety Act &#8211; Deepfake Governance and Enforcement</a></strong></h4><p>The UK Online Safety Act (OSA) has emerged as one of the first operationally enforceable frameworks addressing AI-generated deepfakes, particularly non-consensual and sexually explicit synthetic media. Unlike abstract AI legislation, the OSA emphasizes <strong>real-world outcomes</strong>, holding platforms accountable for the creation, hosting, or distribution of harmful content, whether AI-generated or human-produced.
This makes it a practical blueprint for organizations seeking to demonstrate effective governance under regulatory scrutiny.</p><h3>Scope and Applicability</h3><p>The Act applies to:</p><ul><li><p><strong>Online platforms and services</strong> that host, distribute, or recommend user-generated content, including social media, marketplaces, and messaging platforms.</p></li><li><p><strong>AI-generated content</strong> that impersonates individuals, produces non-consensual sexual imagery, or otherwise exposes users to identity-based or reputational harm.</p></li><li><p><strong>Senior leadership accountability</strong>, requiring organizations to designate responsible officers for content safety and harm mitigation.</p></li></ul><p>The OSA explicitly extends to synthetic media, treating AI-generated deepfakes as a <strong>priority harm category</strong>, signaling that governance programs must address both design-time safeguards and post-deployment monitoring.</p><h3>Key Governance Provisions</h3><ol><li><p><strong>Proactive Risk Assessment</strong><br>Platforms must identify potential risks associated with AI-generated content, evaluate likely harms, and implement preventive measures before deployment.</p></li><li><p><strong>Content Detection and Monitoring</strong><br>Continuous monitoring, abuse reporting mechanisms, and automated detection systems are expected to identify non-consensual or manipulative content promptly.</p></li><li><p><strong>Rapid Takedown and Remediation</strong><br>Upon notice of harmful content, platforms must act swiftly to remove or disable access. 
Delays or procedural gaps can constitute a governance failure under the Act.</p></li><li><p><strong>Operational Accountability</strong><br>The Act reinforces the need for clear internal ownership, defined escalation protocols, and documented decision-making authority for managing AI-related harm.</p></li><li><p><strong>Documentation and Auditability</strong><br>Organizations are required to maintain auditable records demonstrating compliance with safety obligations, including risk assessments, mitigation steps, and incident responses.</p></li></ol><h3>Governance Implications</h3><p>The OSA demonstrates the regulatory shift from ethical intentions to <strong>operational proof</strong>. Enforcement is outcome-focused: regulators evaluate whether organisations can prevent harm, detect misuse, and intervene effectively. For AI governance teams, the key lesson is clear: having policies is insufficient. Companies must be able to <strong>show evidence of governance in action</strong>, including post-deployment controls and rapid response capability.</p><p>Deepfakes under the OSA also serve as a <strong>model for broader AI accountability</strong>. 
The same enforcement principles (risk identification, monitoring, rapid remediation, accountability, and traceability) are likely to extend to other high-risk AI applications, including biased decision-making, manipulative content, and unsafe automation.</p><p>For organisations deploying AI, the UK Online Safety Act provides both a <strong>warning and a roadmap</strong>: governance programs are no longer judged on intent alone but on whether they can demonstrably prevent, detect, and remediate real-world harm.</p><h3><strong>Looking Ahead</strong></h3><ul><li><p><strong>AI Governance Forum 2026 - Amsterdam (21&#8211;22 January 2026)</strong><br>Senior executives and AI risk leaders will gather to discuss practical governance strategies, operationalizing compliance, and preparing for upcoming regulatory regimes across the EU and beyond. The multi&#8209;industry agenda includes actionable frameworks for accountability and risk management in regulated environments.</p></li><li><p><strong>National AI Convention &amp; AI Leadership Conference - United States (20&#8211;21 January 2026)</strong><br>These events bring together AI practitioners and strategic leaders to explore innovation, ethical deployment, and organizational strategies for AI at enterprise scale. Sessions will cover governance implications of emerging technologies and cross&#8209;sector compliance priorities.</p></li><li><p><strong>India AI Impact Summit 2026 - New Delhi (19&#8211;20 February 2026)</strong><br>Hosted by Indian authorities with broad international participation, this summit will emphasize global standards, inclusive AI deployment, and governance frameworks that support ethical use, transparency, and accountability.
India aims to build consensus on international AI norms and regulatory approaches.</p></li></ul><h3><strong>Closing Thoughts</strong></h3><p>As deepfakes take centre stage in AI enforcement, the message to organisations is clear: <strong>governance is measured by action, not intent</strong>. Regulators are evaluating whether companies can detect misuse, remediate harm, and maintain auditable records once AI systems are live. Transparency, rapid takedown mechanisms, and clearly defined accountability structures are no longer optional; they are the baseline for compliance. Looking forward, deepfakes serve as a <strong>litmus test for broader AI oversight</strong>, from biased decision-making to unsafe automation, setting expectations for operational proof across all high-risk AI applications.</p><p><strong>Key Question:</strong> If regulators examined your AI systems tomorrow, could you demonstrate effective control, rapid response, and documented accountability in real operating conditions?</p><p>Until next week,<br><em><strong>The AI Governance Today</strong></em></p>]]></content:encoded></item><item><title><![CDATA[AI Governance Today • Issue #09 • Jan 13 2026 AI Governance Goes Live in Texas]]></title><description><![CDATA[Welcome to the 9th issue of AI Governance Today. As we return to our usual format, the opening weeks of 2026 make one reality unmistakable: AI governance has moved from anticipation to execution. Across the United States and globally, enforceable state-level laws are taking effect, incident reporting expectations are solidifying, and accountability is shifting from abstract responsibility to operational proof.
With federal and state approaches diverging and global alignment still fragmented, organizations deploying AI are increasingly being judged not by intent, but by their ability to demonstrate control, traceability, and readiness under scrutiny.]]></description><link>https://aigovernancetoday.substack.com/p/ai-governance-today-issue-09-jan</link><guid isPermaLink="false">https://aigovernancetoday.substack.com/p/ai-governance-today-issue-09-jan</guid><dc:creator><![CDATA[Anmol Kumar]]></dc:creator><pubDate>Tue, 13 Jan 2026 15:48:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!5Yvx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04e31a8a-bcec-4ecf-8b1f-1c2438f00163_800x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Li4D!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12f44f06-df45-47ba-b61e-91e5debede56_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Li4D!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12f44f06-df45-47ba-b61e-91e5debede56_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!Li4D!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12f44f06-df45-47ba-b61e-91e5debede56_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!Li4D!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12f44f06-df45-47ba-b61e-91e5debede56_2000x600.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Li4D!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12f44f06-df45-47ba-b61e-91e5debede56_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Li4D!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12f44f06-df45-47ba-b61e-91e5debede56_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/12f44f06-df45-47ba-b61e-91e5debede56_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:969310,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/184439092?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12f44f06-df45-47ba-b61e-91e5debede56_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Li4D!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12f44f06-df45-47ba-b61e-91e5debede56_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!Li4D!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12f44f06-df45-47ba-b61e-91e5debede56_2000x600.png 848w, 
https://substackcdn.com/image/fetch/$s_!Li4D!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12f44f06-df45-47ba-b61e-91e5debede56_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!Li4D!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F12f44f06-df45-47ba-b61e-91e5debede56_2000x600.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Welcome to the <strong>9th issue of AI Governance Today</strong>.
As we return to our usual format, the opening weeks of 2026 make one reality unmistakable: AI governance has moved from anticipation to execution. Across the United States and globally, enforceable state-level laws are taking effect, incident reporting expectations are solidifying, and accountability is shifting from abstract responsibility to operational proof. With federal and state approaches diverging and global alignment still fragmented, organizations deploying AI are increasingly being judged not by intent, but by their ability to demonstrate control, traceability, and readiness under scrutiny.</p><h3><strong>TL;DR For The Week</strong></h3><ul><li><p>Texas&#8217;s Responsible Artificial Intelligence Governance Act entering into force confirms that 2026 has begun as an enforcement year for AI regulation, not a preparatory one.</p></li><li><p>State-level AI laws in the U.S. are advancing faster than federal unification efforts, increasing compliance complexity for organizations operating across multiple jurisdictions.</p></li><li><p>Clear statutory prohibitions on harmful AI uses are replacing voluntary ethics frameworks and discretionary risk management approaches.</p></li><li><p>Regulators are prioritizing post-deployment behavior, including misuse, drift, and real-world harm, over design-time assurances.</p></li><li><p>Accountability is being defined operationally, with expectations for clear ownership, escalation authority, and intervention capability inside organizations.</p></li><li><p>Governance maturity is increasingly measured by documentation, traceability, and the ability to withstand regulatory investigation, not by stated principles.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!RA4W!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f452ab-dc8a-4c07-b8f8-632de809e1ce_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RA4W!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f452ab-dc8a-4c07-b8f8-632de809e1ce_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!RA4W!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f452ab-dc8a-4c07-b8f8-632de809e1ce_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!RA4W!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f452ab-dc8a-4c07-b8f8-632de809e1ce_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!RA4W!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f452ab-dc8a-4c07-b8f8-632de809e1ce_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RA4W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f452ab-dc8a-4c07-b8f8-632de809e1ce_2000x600.png" width="1456" height="437" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b3f452ab-dc8a-4c07-b8f8-632de809e1ce_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:335951,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/184439092?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f452ab-dc8a-4c07-b8f8-632de809e1ce_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RA4W!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f452ab-dc8a-4c07-b8f8-632de809e1ce_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!RA4W!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f452ab-dc8a-4c07-b8f8-632de809e1ce_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!RA4W!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f452ab-dc8a-4c07-b8f8-632de809e1ce_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!RA4W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb3f452ab-dc8a-4c07-b8f8-632de809e1ce_2000x600.png 1456w" sizes="100vw"></picture></div></a></figure></div><h3><strong>Spotlight</strong></h3><h4><a href="https://www.haynesboone.com/news/publications/texas-responsible-artificial-intelligence-governance-act-what-businesses-need-to-know">Texas Responsible AI Governance Act Enters Force, Marking First Major U.S. AI Law to Be Executed in 2026</a></h4><p>On <strong>1 January 2026</strong>, the <strong>Texas Responsible Artificial Intelligence Governance Act (TRAIGA)</strong> formally entered into force, making Texas the first U.S.
state this year to <strong>actively enforce a comprehensive AI governance statute</strong> rather than announce or debate one.</p><p>Unlike prior state AI laws that focused narrowly on transparency or sector-specific disclosures, TRAIGA introduces <strong>clear statutory prohibitions on harmful AI uses</strong>, enforceable transparency obligations, and centralized enforcement authority under the <strong>Texas Attorney General</strong>. With the law now live, organizations deploying AI systems that impact Texas residents are subject to <strong>immediate compliance exposure</strong>, not future rulemaking.</p><p>What makes this moment significant is execution. TRAIGA does not rely on future agency guidance to define its core obligations. Prohibited practices, such as unlawful discrimination, behavioral manipulation, facilitation of harm, and creation of illegal content, are already defined in statute. Disclosure requirements for government and healthcare AI use are also immediately applicable. Enforcement mechanisms, including civil penalties, are now available to the state.</p><p>Early legal and compliance advisories issued this month indicate that organizations are reassessing AI deployments that previously relied on internal ethical review or voluntary safeguards. The absence of a private right of action shifts the risk calculus toward <strong>state investigation readiness</strong>, documentation discipline, and defensible governance structures rather than consumer litigation management.</p><p>TRAIGA&#8217;s entry into force matters beyond Texas. It provides a <strong>working U.S. model of enforceable AI governance</strong> at a time when federal policy remains unsettled. For governance teams, the lesson is immediate: 2026 is not a transition year for AI regulation. 
In at least some jurisdictions, it is already an enforcement year.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5Yvx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04e31a8a-bcec-4ecf-8b1f-1c2438f00163_800x1200.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5Yvx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04e31a8a-bcec-4ecf-8b1f-1c2438f00163_800x1200.png 424w, https://substackcdn.com/image/fetch/$s_!5Yvx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04e31a8a-bcec-4ecf-8b1f-1c2438f00163_800x1200.png 848w, https://substackcdn.com/image/fetch/$s_!5Yvx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04e31a8a-bcec-4ecf-8b1f-1c2438f00163_800x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!5Yvx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04e31a8a-bcec-4ecf-8b1f-1c2438f00163_800x1200.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5Yvx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04e31a8a-bcec-4ecf-8b1f-1c2438f00163_800x1200.png" width="800" height="1200" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/04e31a8a-bcec-4ecf-8b1f-1c2438f00163_800x1200.png&quot;,&quot;srcNoWatermark&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0093ee19-d30d-4fd1-b020-6107cf54a78f_800x1200.png&quot;,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1200,&quot;width&quot;:800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:263015,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/184439092?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0093ee19-d30d-4fd1-b020-6107cf54a78f_800x1200.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5Yvx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04e31a8a-bcec-4ecf-8b1f-1c2438f00163_800x1200.png 424w, https://substackcdn.com/image/fetch/$s_!5Yvx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04e31a8a-bcec-4ecf-8b1f-1c2438f00163_800x1200.png 848w, https://substackcdn.com/image/fetch/$s_!5Yvx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04e31a8a-bcec-4ecf-8b1f-1c2438f00163_800x1200.png 1272w, https://substackcdn.com/image/fetch/$s_!5Yvx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04e31a8a-bcec-4ecf-8b1f-1c2438f00163_800x1200.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"></figcaption></figure></div><h3><strong>Policy Radar: Regional and Global Governance Shifts</strong></h3><p><strong><a href="https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/">1. New U.S. Executive Order Seeks National AI Policy Framework</a></strong></p><p><strong>Region:</strong> United States (Federal)<br>The U.S. federal government issued an executive order in December 2025 aimed at establishing a unified national AI policy framework and limiting conflicting state AI laws.
The order directs federal agencies to evaluate and potentially preempt state laws deemed burdensome or inconsistent with the national policy, intensifying the debate over federal versus state authority in AI governance.</p><div><hr></div><p><strong><a href="https://www.bakerdonelson.com/2026-ai-legal-forecast-from-innovation-to-compliance">2. Texas Responsible Artificial Intelligence Governance Act Goes Into Effect</a></strong></p><p><strong>Region:</strong> United States (State-Level)<br>Effective January 1, 2026, Texas&#8217;s Responsible Artificial Intelligence Governance Act establishes a comprehensive AI governance framework that bans certain harmful AI uses (including discriminatory uses and harmful content generation), mandates disclosures for government and healthcare AI deployments, and positions Texas as a leader in AI statutory regulation.</p><div><hr></div><p><strong><a href="https://www.kslaw.com/news-and-insights/new-state-ai-laws-are-effective-on-january-1-2026-but-a-new-executive-order-signals-disruption">3. California and Other State AI Laws Take Effect in 2026</a></strong></p><p><strong>Region:</strong> United States (Multi-State)<br>Several state AI laws, from California&#8217;s rules on AI transparency, chatbot disclosures, and consumer protections to Colorado&#8217;s AI Act requiring reasonable care in high-risk systems, are now effective or phasing in during 2026, reinforcing a patchwork of enforceable operational expectations across states.</p><div><hr></div><p><strong><a href="https://www.reuters.com/business/media-telecom/spain-moves-curb-ai-deepfakes-tighten-consent-rules-images-2026-01-13/">4. Spain Advances Legislation to Curb AI Deepfakes and Non-Consensual Content</a></strong></p><p><strong>Region:</strong> European Union / Spain<br>Spain&#8217;s government approved draft legislation targeting AI-generated deepfakes and the non-consensual use of images and likenesses.
The bill proposes age-based consent rules and commercial use restrictions, aligning with broader EU efforts to criminalize non-consensual intimate imagery produced by AI.</p><div><hr></div><p><strong><a href="https://www.theverge.com/news/860881/uk-ai-x-grok-law-criminalizing-deepfake-nudes-ai">5. UK Passes Law Criminalizing Non-Consensual Deepfake and AI-Generated Intimate Images</a></strong></p><p><strong>Region:</strong> United Kingdom (National)<br>In direct response to widespread misuse of AI image-generation tools, the UK government has enacted legislation under the <strong>Online Safety Act</strong> designating the creation of non-consensual intimate deepfake images as a <em>priority offence</em> with proactive platform obligations and significant penalties for non-compliance. Regulators such as Ofcom are already investigating major AI platforms over alleged violations, signaling an enforcement focus on harmful AI outcomes and content moderation accountability.</p><div><hr></div><p><strong><a href="https://sg.finance.yahoo.com/news/end-voluntary-ethics-pacific-ai-141000419.html?guccounter=1&amp;guce_referrer=aHR0cHM6Ly9jaGF0Z3B0LmNvbS8&amp;guce_referrer_sig=AQAAAETQI_dfRfzwC1VWclgxDmrWtIAsHUEEmktRM56diAbTHZlHGSklNqBwoSE3gTz9kB8i5vfjdVm6O-HjIkgFRSeHg5vJDPkTNxW5lSegtp8N7yVCPAdnLNUNqGP4gm1ugy0-rS7ZglOBJOhyvyyOn6Mtosq9MhOKIJ32Dno3IiM1">6. Ongoing Proliferation of AI Laws Worldwide Surges in 2025-26</a></strong></p><p><strong>Region:</strong> Global<br>Recent reporting confirms that <strong>30+ nations and 15+ U.S. 
states</strong> have passed new AI laws, with a significant increase in mandatory incident reporting and statutory governance obligations, signaling a global shift from voluntary ethics to enforceable regulation.</p><h3><strong>Framework Focus</strong></h3><h4><a href="https://capitol.texas.gov/tlodocs/89R/analysis/html/HB00149S.htm">Texas Responsible Artificial Intelligence Governance Act (TRAIGA)</a></h4><p><strong>Overview</strong><br>The <strong><a href="https://www.lw.com/en/insights/texas-signs-responsible-ai-governance-act-into-law">Texas Responsible Artificial Intelligence Governance Act (HB 149)</a></strong><a href="https://www.lw.com/en/insights/texas-signs-responsible-ai-governance-act-into-law">, signed into law on June 22, 2025, and effective </a><strong><a href="https://www.lw.com/en/insights/texas-signs-responsible-ai-governance-act-into-law">1 January 2026</a></strong>, represents one of the most comprehensive state-level AI regulatory frameworks enacted in the U.S. It aims to balance <strong>consumer protections, civil rights, and innovation</strong> by establishing enforceable obligations on AI use, transparent interactions, prohibited practices, and governance structures, particularly for governmental entities and entities doing business in Texas.</p><p><strong>Scope and Applicability:</strong><br>TRAIGA applies to:</p><ul><li><p>Any entity that <strong>develops, deploys, or offers AI systems</strong> in Texas, including those whose products or services are used by Texas residents.</p></li><li><p><strong>Governmental bodies</strong> at the state and local levels, which face distinct and more rigorous obligations regarding transparency and prohibited practices.</p></li></ul><p><strong>Key Governance Provisions:</strong></p><p><strong>1. 
Definitions and Transparency Requirements</strong></p><ul><li><p>An <strong>&#8220;artificial intelligence system&#8221;</strong> is broadly defined as any machine-based system that uses inputs to generate outputs (e.g., content, decisions, predictions) influencing physical or virtual environments.</p></li><li><p>Government entities must provide <strong>clear, conspicuous notice</strong> to consumers when interacting with an AI system before or at the time of the interaction; healthcare providers must disclose AI involvement in patient care.</p></li></ul><p><strong>2.</strong> <strong>Prohibited AI Practices</strong><br>TRAIGA categorically prohibits the development or deployment of AI systems for:</p><ul><li><p><strong>Manipulating human behavior</strong> (e.g., self-harm encouragement, criminal conduct facilitation).</p></li><li><p><strong>Unlawful discrimination</strong> against protected classes.</p></li><li><p><strong>Creation or distribution</strong> of <strong>child sexual abuse material</strong> and unlawful deepfakes.</p></li><li><p><strong>Infringement on constitutional rights</strong>.<br>These restrictions apply broadly, including to private sector actors operating in Texas, elevating legal risk when AI systems are used for harmful or manipulative purposes.</p></li></ul><p><strong>3. Regulatory Sandbox and Innovation Support</strong><br>TRAIGA establishes a <strong>regulatory sandbox</strong> administered by the Texas Department of Information Resources to allow eligible participants to test AI systems without traditional regulatory burdens for up to 36 months. Participants must submit detailed reports on system performance, risks, mitigation activities, and stakeholder feedback.</p><p><strong>4. Texas Artificial Intelligence Council</strong><br>The Act creates a <strong>seven-member advisory council</strong> composed of experts in AI, public policy, ethics, risk management, and related domains. 
The Council&#8217;s mandate includes advising the legislature on AI policy, recommending improvements, identifying regulatory barriers, and publishing reports on compliance, ethical implications, and legal risks.</p><p><strong>5. Enforcement and Penalties</strong></p><ul><li><p>Enforcement authority lies with the <strong>Texas Attorney General</strong>, who can issue civil investigative demands, pursue penalties, and seek injunctive relief.</p></li><li><p>Civil penalties for violations range from <strong>$10,000 to $200,000 per violation</strong>, with daily assessments for ongoing noncompliance.</p></li><li><p>TRAIGA has <strong>no private right of action</strong>; enforcement is exclusively through state mechanisms.</p></li></ul><p><strong>Governance Implications:</strong></p><p><strong>Operational Accountability: </strong>Organizations with AI deployments in Texas must prepare internal documentation, governance processes, and <strong>proactive risk controls</strong> to demonstrate compliance during civil investigative demands. 
This elevates internal AI governance from voluntary best practice to enforceable legal duty.</p><p><strong>Traceability and Documentation: </strong>While TRAIGA stops short of the formal lifecycle traceability mandates found in some EU proposals, its regulatory sandbox and enforcement mechanisms effectively require organizations to maintain <strong>auditable records of AI design, deployment, risk mitigation, and outcomes</strong>.</p><p><strong>Sector-Focused Obligations: </strong>Healthcare and governmental use cases trigger <strong>specific disclosure requirements</strong>, signaling that regulated sectors should prioritize consumer protection and informed consent when AI is involved in decision-making or treatment.</p><h3><strong>Looking Ahead</strong></h3><ul><li><p><strong><a href="https://trustindigitallife.eu/event/ai-governance-in-2026/">AI Governance in 2026: Webinar (15 January 2026)</a></strong><br>A live webinar examining lessons from past AI governance approaches and their relevance to policy, risk management, and regulatory frameworks in 2026.</p></li><li><p><strong><a href="https://www.manatt.com/insights/webinars/navigating-ai-policy-key-takeaways-from-2025-and-what-local-leaders-should-expect-in-2026">Navigating AI Policy Webinar (21 January 2026)</a></strong><br>Panel discussion on key AI legislative and regulatory developments from 2025 and what local leaders and organizations should expect in 2026, including federal/state tensions and compliance strategies.</p></li><li><p><strong><a href="https://timesofindia.indiatimes.com/city/thiruvananthapuram/kerala-ai-future-con-on-jan-23/articleshow/126380747.cms">Kerala AI Future Con (23 January 2026)</a></strong><br>One-day summit in Kovalam, India, focusing on AI governance, public-sector use, economic applications, and development strategy, with international delegate participation.</p></li><li><p><strong><a href="https://aaai.org/conference/aaai/aaai-26/">AAAI 2026 Conference (20&#8211;27 January 
2026)</a></strong><br>Major international AI research conference with sessions spanning AI progress, ethics, governance implications, and safety topics (Singapore).</p></li><li><p><strong><a href="https://www.axios.com/2026/01/07/watch-axios-house-davos-2026-events-day-1">Axios House at Davos 2026 (19&#8211;22 January 2026)</a></strong><br>Virtual discussions featuring themes on AI risk, innovation governance, and strategic implications for global policy and regulatory alignment.</p></li><li><p><strong><a href="https://www.complianceweek.com/webcasts/jan-22-ai-in-compliance-and-ethics-whats-working-whats-not-and-what-comes-next/36426.article">&#8220;AI in Compliance &amp; Ethics&#8221; Webinar (22 January 2026)</a></strong><br>Practical webcast unpacking how organizations are using AI in compliance and ethical contexts and emerging governance practices.</p></li></ul><h3><strong>Closing Thoughts</strong></h3><p>As state-level AI laws move from passage to enforcement, 2026 is already testing whether governance programs are built for execution rather than intention. With statutes like Texas&#8217;s Responsible Artificial Intelligence Governance Act now live, accountability, documentation, and intervention authority are no longer theoretical requirements. 
The defining question for organizations deploying AI this year is simple: <strong>if regulators asked tomorrow, could you demonstrate control over your AI systems in real operating conditions?</strong></p><p>Until next week,<br><em><strong>The AI Governance Today</strong></em></p>]]></content:encoded></item><item><title><![CDATA[AI Governance Today • Issue #08 • Jan 06 2026 Top 10 Trends Shaping the Future of AI Policy and Governance]]></title><description><![CDATA[Welcome to the 8th issue of AI Governance Today and the first issue of 2026.]]></description><link>https://aigovernancetoday.substack.com/p/ai-governance-today-issue-08-jan</link><guid isPermaLink="false">https://aigovernancetoday.substack.com/p/ai-governance-today-issue-08-jan</guid><dc:creator><![CDATA[Anmol Kumar]]></dc:creator><pubDate>Tue, 06 Jan 2026 16:52:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8mPT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F426196ab-fc1f-4281-b3a4-e0c20baf0572_2000x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome to the <strong>8th issue of AI Governance Today</strong> and the first issue of 2026. As we begin the new year, momentum around AI governance continues to accelerate, with policy activity, regulatory expectations, and organizational accountability all moving rapidly from theory into practice. Rather than focusing on weekly regulatory updates, this edition takes a step back to reflect on the bigger picture. <strong>This week we digress from our usual newsletter format and take a deeper look at the top ten developments that are likely to shape AI policy and governance in 2026.</strong> These trends highlight how expectations are shifting from principles to implementation and from intention to verifiable evidence of control. 
<strong>I wish you a very Happy New Year and hope 2026 brings you health, clarity, and meaningful progress in your work.</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8mPT!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F426196ab-fc1f-4281-b3a4-e0c20baf0572_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8mPT!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F426196ab-fc1f-4281-b3a4-e0c20baf0572_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!8mPT!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F426196ab-fc1f-4281-b3a4-e0c20baf0572_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!8mPT!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F426196ab-fc1f-4281-b3a4-e0c20baf0572_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!8mPT!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F426196ab-fc1f-4281-b3a4-e0c20baf0572_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8mPT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F426196ab-fc1f-4281-b3a4-e0c20baf0572_2000x600.png" width="1456" height="437" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/426196ab-fc1f-4281-b3a4-e0c20baf0572_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:433876,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/183686984?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F426196ab-fc1f-4281-b3a4-e0c20baf0572_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8mPT!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F426196ab-fc1f-4281-b3a4-e0c20baf0572_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!8mPT!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F426196ab-fc1f-4281-b3a4-e0c20baf0572_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!8mPT!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F426196ab-fc1f-4281-b3a4-e0c20baf0572_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!8mPT!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F426196ab-fc1f-4281-b3a4-e0c20baf0572_2000x600.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h4>Top 10 Trends Shaping the Future of AI Policy and Governance in 2026</h4><ol><li><p><strong>Global baseline convergence on AI safety standards:</strong><br>There is increasing movement toward alignment across jurisdictions on what constitutes safe and responsible AI. While regulatory texts differ, the underlying concepts are converging: risk-based classification, transparency obligations, documentation of training data and evaluations, incident reporting, and human oversight expectations. Forums such as the G7 Hiroshima Process, OECD AI Principles, and international standards bodies are accelerating this alignment.
The practical impact is that companies building AI systems for multiple markets will begin to see more consistent expectations around governance rather than reinventing compliance structures country by country. This narrows regulatory fragmentation and reduces uncertainty.</p><p></p></li><li><p><strong>Implementation phase of the EU AI Act:</strong><br>The EU AI Act now enters its most consequential stage, where statutory language becomes operational requirements. Delegated acts will define thresholds for high-risk AI and general-purpose models. Harmonized technical standards will specify how conformity assessments are conducted. Notified bodies will be accredited to audit systems. Codes of practice will be developed for foundation models. Organizations will need to inventory AI systems, classify risk, document training processes, and be ready to demonstrate compliance throughout the lifecycle. Early enforcement actions will likely set precedents that shape industry behavior.</p><p></p></li><li><p><strong>U.S. federal regulatory action without comprehensive legislation:</strong><br>The United States may not pass a single comprehensive AI law in the near term, but governance will still advance through agency action. The FTC is addressing unfair and deceptive AI practices. The CFPB is scrutinizing credit and lending algorithms. The EEOC is focused on AI in hiring and employment decisions. HHS is examining AI in clinical decision support. The SEC is watching AI-enabled financial advice and trading. This creates a patchwork of sectoral enforcement and guidance grounded in consumer protection, discrimination, safety, and securities law. For companies, this means governance remains mandatory even without one unified statute.</p><p></p></li><li><p><strong>Formalization of AI risk management systems:</strong><br>AI governance is shifting from principle-based statements toward structured management systems. 
Frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework encourage organizations to manage AI risk with defined policies, assigned roles, documented processes, auditing, and continuous improvement. Boards are beginning to require oversight reporting on AI risk in the same way they do with cybersecurity and privacy. Internal governance structures such as AI risk committees and independent model review teams are becoming institutionalized. Assurance and certification regimes will increasingly allow organizations to demonstrate maturity to customers, regulators, and insurers.</p><p></p></li><li><p><strong>Evaluation, testing, and monitoring becoming mandatory:</strong><br>There is a major shift from voluntary testing toward required evaluation across the AI lifecycle. Pre-deployment activities now include safety testing, robustness checks, bias and discrimination analysis, red-teaming for misuse, and benchmarking in realistic conditions. Post-deployment expectations emphasize continuous monitoring, drift detection, incident response plans, rollback capability, and user feedback channels. This is especially relevant in healthcare, hiring, finance, education, critical infrastructure, and the public sector. An ecosystem of third-party auditors and validation tools is developing, while companies build internal model validation teams similar to those in the banking sector after the 2008 financial crisis.</p><p></p></li><li><p><strong>Governance tailored to foundation models and agentic systems:</strong><br>Different types of AI systems present different categories of risk. Foundation models trained on massive general-purpose datasets raise questions of emergent capability, dual use, and cascading downstream impact. Agentic systems that plan, act autonomously, or call external tools introduce operational and cyber-physical risk. Policymakers are beginning to differentiate between these classes. 
Expect obligations such as capability documentation, safeguards for high-risk functions, controlled release strategies, and stronger oversight requirements. Application-level AI that is narrow in scope will likely face proportionate but lighter regulatory treatment.</p><p></p></li><li><p><strong>Growing clarity on liability for AI-driven harm:</strong><br>As AI systems influence credit decisions, hiring outcomes, medical recommendations, transportation systems, and consumer products, defining responsibility when harm occurs is essential. Courts and lawmakers are beginning to clarify duties of care for developers, deployers, and integrators. Some contexts may lean toward strict liability while others use negligence standards based on foreseeability and control. Companies will increasingly rely on indemnity clauses, insurance products, and contractual allocation of risk. Better documentation, logging, and testing processes will become crucial because legal outcomes will hinge on whether organizations can show that they acted responsibly.</p><p></p></li><li><p><strong>Governance expectations embedded in procurement and commercial contracting:</strong><br>AI governance is rapidly becoming a prerequisite for doing business. Governments, financial institutions, hospitals, and enterprise buyers are embedding governance questions into procurement processes. Vendors are being asked to provide system cards, model documentation, dataset provenance, security controls, human oversight mechanisms, risk assessments, and incident response procedures. Companies unable to explain their systems or demonstrate controls will increasingly lose contracts. This makes governance capability a source of commercial advantage and revenue enablement, not just a compliance burden.</p><p></p></li><li><p><strong>Expansion of transparency, documentation, and incident disclosure regimes:</strong><br>Transparency expectations are expanding in both law and market practice. 
Requirements include documentation of training data sources where feasible, disclosure when users are interacting with AI systems, maintenance of logs for auditability, and reporting of serious incidents and safety risks. Algorithmic impact assessments are increasingly required before deploying systems that affect access to essential services or rights. These expectations encourage better record-keeping and traceability within engineering teams and enable regulators, customers, and the public to better understand AI system behavior.</p><p></p></li><li><p><strong>Professionalization of the AI governance workforce:</strong><br>AI governance is developing into a defined profession. Organizations are hiring AI risk officers, responsible AI leads, AI policy counsel, compliance engineers, and model validation specialists. Universities and training organizations are building programs specifically in AI governance and safety. Professional certifications are beginning to mirror the trajectory of privacy and cybersecurity credentials. Over time, clearer accountability structures will emerge across product management, legal, trust and safety, security, and compliance teams. Governance is becoming an operational discipline with skills, tools, and career pathways.</p></li></ol><p>As AI systems become more capable and more deeply embedded in economic and social infrastructure, governance is no longer a peripheral discussion. It is becoming central to innovation, trust, and competitiveness. The coming year will test whether institutions can translate principles into operational discipline and whether organizations can move from aspirational commitments to measurable accountability. The direction of travel is clear, but the pace and quality of implementation will depend on choices made now by policymakers, companies, and practitioners. 
<strong>As we step into 2026, I will leave you with this question: from the top ten trends outlined here, what do you think we missed that deserves to be on the list?</strong></p><p>Until next week,<br><em><strong>The AI Governance Today</strong></em></p>]]></content:encoded></item><item><title><![CDATA[AI Governance Today • Issue #07 • Dec 23 2025 Accountability Takes Center Stage in AI Governance]]></title><description><![CDATA[Welcome to the 7th issue of AI Governance Today. This week&#8217;s developments point to a clear shift in how AI governance is taking shape in practice. Regulation is moving faster at the state and sectoral level than at the national level, operational expectations are becoming enforceable before comprehensive frameworks are finalized, and accountability is increasingly judged by evidence rather than intent. As debates over jurisdiction continue, one message is consistent across regions. Organizations deploying AI are being expected to demonstrate control, traceability, and responsibility in real-world conditions, not just in policy statements. 
As the year draws to a close, I also want to wish you a Merry Christmas and a peaceful holiday season.]]></description><link>https://aigovernancetoday.substack.com/p/ai-governance-today-issue-07-dec</link><guid isPermaLink="false">https://aigovernancetoday.substack.com/p/ai-governance-today-issue-07-dec</guid><dc:creator><![CDATA[Anmol Kumar]]></dc:creator><pubDate>Tue, 23 Dec 2025 14:21:44 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c9b2f489-7ee4-4eb0-86f2-4451c58a4b09_2000x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nMyn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4d18230-16b5-47c6-b370-e8468d1d0a54_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nMyn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4d18230-16b5-47c6-b370-e8468d1d0a54_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!nMyn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4d18230-16b5-47c6-b370-e8468d1d0a54_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!nMyn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4d18230-16b5-47c6-b370-e8468d1d0a54_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!nMyn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4d18230-16b5-47c6-b370-e8468d1d0a54_2000x600.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!nMyn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4d18230-16b5-47c6-b370-e8468d1d0a54_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d4d18230-16b5-47c6-b370-e8468d1d0a54_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:381914,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/182415895?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4d18230-16b5-47c6-b370-e8468d1d0a54_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nMyn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4d18230-16b5-47c6-b370-e8468d1d0a54_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!nMyn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4d18230-16b5-47c6-b370-e8468d1d0a54_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!nMyn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4d18230-16b5-47c6-b370-e8468d1d0a54_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!nMyn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd4d18230-16b5-47c6-b370-e8468d1d0a54_2000x600.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Welcome to the 7th issue of <em>AI Governance Today</em>. This week&#8217;s developments point to a clear shift in how AI governance is taking shape in practice. Regulation is moving faster at the state and sectoral level than at the national level, operational expectations are becoming enforceable before comprehensive frameworks are finalized, and accountability is increasingly judged by evidence rather than intent. As debates over jurisdiction continue, one message is consistent across regions. 
Organizations deploying AI are being expected to demonstrate control, traceability, and responsibility in real-world conditions, not just in policy statements. As the year draws to a close, I also want to wish you a Merry Christmas and a peaceful holiday season.</p><h3><strong>TL;DR For The Week</strong></h3><ul><li><p>State-level AI regulation is accelerating, with New York&#8217;s RAISE Act signaling a shift toward enforceable operational accountability and mandatory incident reporting.</p></li><li><p>Federal efforts to unify AI policy are colliding with state ambitions, creating a fragmented and uncertain governance landscape for organizations operating across jurisdictions.</p></li><li><p>Regulators are increasingly focused on outcomes and real-world behavior, not just stated safeguards or design-time risk assessments.</p></li><li><p>Accountability is being treated as an organizational responsibility, requiring clear ownership, escalation paths, and authority to intervene.</p></li><li><p>Traceability across the AI lifecycle is emerging as a baseline expectation for audits, enforcement, and regulatory credibility.</p></li><li><p>Governance maturity is now measured less by principles and more by the ability to produce evidence under scrutiny.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ANDv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bffbbb6-c0b3-4500-bb3a-0e897dd5b72f_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ANDv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bffbbb6-c0b3-4500-bb3a-0e897dd5b72f_2000x600.png 424w, 
https://substackcdn.com/image/fetch/$s_!ANDv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bffbbb6-c0b3-4500-bb3a-0e897dd5b72f_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!ANDv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bffbbb6-c0b3-4500-bb3a-0e897dd5b72f_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!ANDv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bffbbb6-c0b3-4500-bb3a-0e897dd5b72f_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ANDv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bffbbb6-c0b3-4500-bb3a-0e897dd5b72f_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5bffbbb6-c0b3-4500-bb3a-0e897dd5b72f_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:331939,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/182415895?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bffbbb6-c0b3-4500-bb3a-0e897dd5b72f_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!ANDv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bffbbb6-c0b3-4500-bb3a-0e897dd5b72f_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!ANDv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bffbbb6-c0b3-4500-bb3a-0e897dd5b72f_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!ANDv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bffbbb6-c0b3-4500-bb3a-0e897dd5b72f_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!ANDv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bffbbb6-c0b3-4500-bb3a-0e897dd5b72f_2000x600.png 1456w" sizes="100vw"></picture></div></a></figure></div><h3><strong>Spotlight</strong></h3><h4><strong>State-Level AI Safety Law Signals New Phase in U.S. AI Governance</strong></h4><p>This week&#8217;s <a href="https://www.axios.com/2025/12/19/new-york-ai-safety-bill-hochul">signing of the </a><strong><a href="https://www.axios.com/2025/12/19/new-york-ai-safety-bill-hochul">RAISE Act in New York</a></strong> represents a meaningful shift in how AI safety and accountability are being approached at the state level, and signals broader implications for national governance. On <strong>19 December 2025</strong>, New York Governor Kathy Hochul enacted a comprehensive state-level AI safety law that includes mandatory <strong>incident reporting requirements and risk mitigation measures</strong> for advanced AI systems. Unlike past legislation focused narrowly on specific categories of AI use, this law is explicitly designed to create ongoing oversight mechanisms for AI systems across sectors and scales of development.</p><p>The RAISE Act&#8217;s requirements align closely with existing trends in state legislation, such as California&#8217;s AI transparency measures, but it introduces enforceable operational expectations for organizations deploying AI within the state. In doing so, New York positions itself as a governance leader while many federal initiatives remain in development or enforcement phases. </p><p>This development is particularly notable because it comes amid federal action aimed at consolidating regulatory authority over AI. 
Earlier this month, <a href="https://www.eversheds-sutherland.com/en/united-states/insights/trump-executive-order-targets-excessive-state-ai-laws-and-calls-for-a-national-standard-for-ai">a federal executive order sought to establish a unified <strong>national AI policy framework</strong></a> and to challenge state-level AI laws deemed inconsistent with that framework, intensifying the debate over regulatory authority.</p><p>The juxtaposition of New York&#8217;s aggressive state governance and federal efforts to streamline and preempt state rules illustrates the complexity of the current U.S. regulatory landscape. Organizations operating in multiple jurisdictions must now prepare for coexisting, potentially overlapping governance regimes that emphasize operational accountability, transparent reporting, and measurable safety outcomes.</p><p>For governance teams, the broader signal is clear: enforceable operational expectations are arriving before uniform federal statutes. 
Whether through state legislation like the RAISE Act or through evolving federal guidance and enforcement, the imperative for traceability, accountability, and incident reporting is gaining legal force.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rKBC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f658b3-06d8-4a8d-bc81-bfc92708ba16_800x533.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rKBC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f658b3-06d8-4a8d-bc81-bfc92708ba16_800x533.png 424w, https://substackcdn.com/image/fetch/$s_!rKBC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f658b3-06d8-4a8d-bc81-bfc92708ba16_800x533.png 848w, https://substackcdn.com/image/fetch/$s_!rKBC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f658b3-06d8-4a8d-bc81-bfc92708ba16_800x533.png 1272w, https://substackcdn.com/image/fetch/$s_!rKBC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f658b3-06d8-4a8d-bc81-bfc92708ba16_800x533.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rKBC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f658b3-06d8-4a8d-bc81-bfc92708ba16_800x533.png" width="800" height="533" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e0f658b3-06d8-4a8d-bc81-bfc92708ba16_800x533.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:533,&quot;width&quot;:800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:170094,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/182415895?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f658b3-06d8-4a8d-bc81-bfc92708ba16_800x533.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rKBC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f658b3-06d8-4a8d-bc81-bfc92708ba16_800x533.png 424w, https://substackcdn.com/image/fetch/$s_!rKBC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f658b3-06d8-4a8d-bc81-bfc92708ba16_800x533.png 848w, https://substackcdn.com/image/fetch/$s_!rKBC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f658b3-06d8-4a8d-bc81-bfc92708ba16_800x533.png 1272w, https://substackcdn.com/image/fetch/$s_!rKBC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0f658b3-06d8-4a8d-bc81-bfc92708ba16_800x533.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3><strong>Policy Radar: Regional and Global Governance Shifts</strong></h3><p><strong>1. <a href="https://www.hoganlovells.com/en/publications/white-house-issues-executive-order-to-establish-a-federal-ai-policy-and-preempt-state-laws">Federal Push Toward Unified AI Policy</a></strong><br><strong>Region: United States</strong><br>The U.S. federal government issued a new executive order aimed at establishing a centralized national AI policy framework. The order signals an intent to reduce fragmentation by limiting conflicting state-level AI regulations and reinforcing federal leadership on AI governance, safety expectations, and innovation priorities.</p><div><hr></div><p><strong>2. 
<a href="https://www.axios.com/2025/12/19/new-york-ai-safety-bill-hochul">New York Enacts Comprehensive AI Safety Law</a></strong><br><strong>Region: United States (State-Level)</strong><br>New York enacted a sweeping AI safety law introducing requirements around risk mitigation, incident reporting, and accountability for certain AI systems. The law reflects growing state-level momentum to regulate AI safety in the absence of comprehensive federal legislation.</p><div><hr></div><p><strong>3. <a href="https://www.myjournalcourier.com/news/article/state-regulating-emerging-ai-technology-21248368.php">States Push Back Against Federal AI Preemption</a></strong><br><strong>Region: United States (State-Level)</strong><br>Several U.S. states publicly pushed back against federal efforts to preempt state AI laws, arguing that states must retain authority to address local harms, consumer protection, and ethical AI use. This tension highlights ongoing uncertainty over the future balance between federal and state AI governance.</p><div><hr></div><p><strong>4. <a href="https://www.reuters.com/business/energy/us-energy-regulator-directs-pjm-launch-rules-ai-connections-2025-12-18/">Energy Regulators Address AI Infrastructure Risks</a></strong><br><strong>Region: United States</strong><br>U.S. energy regulators directed grid operators to develop rules governing AI-driven data center connections. This move reflects growing policy attention on the downstream infrastructure impacts of large-scale AI deployment, including energy reliability and systemic risk.</p><div><hr></div><p><strong>5. <a href="https://natlawreview.com/article/india-issues-2025-ai-governance-guidelines-how-it-compares-other-global-ai-acts">India Releases Updated AI Governance Guidelines</a></strong><br><strong>Region: India</strong><br>India issued new AI governance guidelines outlining expectations around responsible development, deployment, and oversight. 
The guidance signals India&#8217;s intent to formalize its AI governance posture while balancing innovation, ethics, and economic competitiveness.</p><div><hr></div><p><strong>6. <a href="https://www.unesco.org/en/articles/unesco-ai-readiness-assessment-report-anchoring-ethics-ai-governance-philippines">UNESCO Advances National AI Readiness Assessments</a></strong><br><strong>Region: Global</strong><br>UNESCO released new AI readiness assessment findings in partnership with national governments, emphasizing ethical governance, institutional capacity, and regulatory preparedness. These assessments are increasingly used as reference points for countries developing AI strategies.</p><div><hr></div><p><strong>7. <a href="https://www.tripurastarnews.com/office-of-principal-scientific-adviser-convenes-high-level-roundtable-on-techno-legal-regulation-for-responsible-innovation-aligned-ai-governance/">Governments Explore Techno-Legal AI Governance Models</a></strong><br><strong>Region: Global</strong><br>Policymakers convened high-level forums focused on techno-legal approaches to AI governance, bringing together legal, technical, and policy perspectives. These discussions reflect growing recognition that effective AI regulation requires integrated legal and technical frameworks.</p><h3><strong>Framework Focus</strong></h3><h4><strong>Accountability and Traceability as the New Control Layer</strong></h4><p>Across jurisdictions and governance regimes, a common expectation is taking shape. Organizations are increasingly expected to account for how AI systems behave and to trace how decisions are made over time.</p><p>Rather than introducing entirely new legal constructs, regulators are converging on a shared control layer built around accountability and traceability. Whether through binding regulation, voluntary frameworks, or enforcement actions, the underlying question is becoming consistent. 
Can an organization explain what its AI system did, why it did it, and who was responsible when it mattered?</p><p>Traceability is emerging as the technical foundation of this expectation. Regulators and auditors increasingly expect organizations to reconstruct the full lifecycle of an AI system, from data sourcing and model development to deployment decisions, system updates, and real-world outcomes. This includes the ability to show how risks were identified, how controls were applied, and how behavior was monitored and corrected after deployment.</p><p>Accountability is increasingly framed as an organizational design problem rather than a technical one. It is no longer sufficient to claim human oversight in principle. Organizations are expected to define ownership, escalation paths, and intervention authority in practice. When AI systems behave unexpectedly, ambiguity around responsibility is increasingly treated as a governance failure.</p><p>This shift applies regardless of whether a system is traditional, generative, or agentic. As AI systems gain autonomy and operate across tools and workflows, the ability to trace actions and assign responsibility becomes the primary mechanism through which governance remains possible.</p><p>Accountability and traceability are not replacing existing frameworks. They are becoming the connective tissue that determines whether those frameworks can be enforced. In the next phase of AI governance, compliance will be judged less by stated principles and more by an organization&#8217;s ability to produce credible evidence under scrutiny.</p><h3><strong>Looking Ahead</strong></h3><ul><li><p><strong><a href="https://www.complianceandrisks.com/webinar/ai-rules-are-changing-key-regulatory-updates-for-2025-2026/">Early January 2026 &#8211; AI Regulation Updates Webinar (Virtual)</a></strong><br>An expert-led webinar providing a practical overview of recent AI regulatory developments, including the EU AI Act, U.S. 
federal guidance, and emerging governance trends across Asia, with a focus on compliance implications for organizations.</p></li><li><p><strong><a href="https://govciomedia.com/events/">January 2026 &#8211; AI Risk &amp; Evaluation Standards Updates</a></strong><br>Several standards bodies are expected to publish updated guidance on AI evaluation, documentation, and post-deployment monitoring practices.</p></li></ul><h3><strong>Closing Thoughts</strong></h3><p>As the year comes to a close, AI governance is being tested not by intention, but by execution. Accountability and traceability are becoming the measures that matter most when systems operate in the real world. </p><p><strong>As expectations rise and oversight tightens, are organizations truly prepared to show how their AI systems behave when it counts?</strong></p><p>Until next week,<br><em><strong>The AI Governance Today Team</strong></em></p>]]></content:encoded></item><item><title><![CDATA[AI Governance Today • Issue #06 • Dec 16 2025 AI That Acts, and the Rules That Lag Behind]]></title><description><![CDATA[Welcome to the 6th issue of AI Governance Today. This week, the focus shifts from models to systems as agentic AI moves closer to everyday deployment. 
With the release of the OWASP Top 10 for Agentic AI and growing attention on how autonomous systems are governed, it is becoming clear that many of the hardest AI risks ahead are not about what models generate, but about how systems behave once they are given the ability to act.]]></description><link>https://aigovernancetoday.substack.com/p/ai-governance-today-issue-06-dec</link><guid isPermaLink="false">https://aigovernancetoday.substack.com/p/ai-governance-today-issue-06-dec</guid><dc:creator><![CDATA[Anmol Kumar]]></dc:creator><pubDate>Tue, 16 Dec 2025 16:25:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!PmXL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F704de679-9908-451d-ac79-a1769bd2b2cc_2000x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TNDf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TNDf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1272w, 
https://substackcdn.com/image/fetch/$s_!TNDf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TNDf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/312fb446-2881-454e-afec-552a7c128913_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:969310,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/178634446?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!TNDf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 848w, 
https://substackcdn.com/image/fetch/$s_!TNDf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Welcome to the 6th issue of <strong>AI Governance Today</strong>. 
This week, the focus shifts from models to systems as agentic AI moves closer to everyday deployment. With the release of the OWASP Top 10 for Agentic AI and growing attention on how autonomous systems are governed, it is becoming clear that many of the hardest AI risks ahead are not about what models generate, but about how systems behave once they are given the ability to act.</p><h3><strong>TL;DR For The Week</strong></h3><ul><li><p>The release of the OWASP Top 10 for Agentic AI marks a shift in how AI risk is understood, away from individual models and toward system-level behavior, autonomy, and control.</p></li><li><p>Agentic AI risks are largely governance failures, not model failures, driven by excessive autonomy, weak oversight, poor observability, and unclear accountability.</p></li><li><p>Regulators globally are moving toward lifecycle governance, stronger organizational controls, and traceability, even though no law yet targets agentic AI directly.</p></li><li><p>Existing frameworks like the EU AI Act, NIST AI RMF, and ISO/IEC 42001 are being stretched to cover agentic systems, but they were not designed for autonomous, multi-step behavior.</p></li><li><p>The gap between how fast agentic AI is being deployed and how slowly governance is evolving is becoming hard to ignore.</p></li><li><p>The core challenge ahead is no longer controlling AI outputs, but controlling AI behavior over time.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PmXL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F704de679-9908-451d-ac79-a1769bd2b2cc_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!PmXL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F704de679-9908-451d-ac79-a1769bd2b2cc_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!PmXL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F704de679-9908-451d-ac79-a1769bd2b2cc_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!PmXL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F704de679-9908-451d-ac79-a1769bd2b2cc_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!PmXL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F704de679-9908-451d-ac79-a1769bd2b2cc_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!PmXL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F704de679-9908-451d-ac79-a1769bd2b2cc_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/704de679-9908-451d-ac79-a1769bd2b2cc_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:264009,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/181785117?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F704de679-9908-451d-ac79-a1769bd2b2cc_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!PmXL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F704de679-9908-451d-ac79-a1769bd2b2cc_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!PmXL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F704de679-9908-451d-ac79-a1769bd2b2cc_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!PmXL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F704de679-9908-451d-ac79-a1769bd2b2cc_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!PmXL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F704de679-9908-451d-ac79-a1769bd2b2cc_2000x600.png 1456w" sizes="100vw"></picture></div></a></figure></div><h3><strong>Spotlight</strong></h3><h4><strong>OWASP Top 10 for Agentic AI and Why Governance Has to Catch Up</strong></h4><p>This week, the release of the <strong><a href="https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/">OWASP Top 10 for Agentic Applications (2026)</a></strong> felt like a turning point in how AI risk is being discussed. For a long time, most governance and security conversations focused on models. How they were trained. What data they used. Whether outputs were biased or explainable. The OWASP list makes a different point. Once AI systems start acting, planning, and interacting with other systems, the real risks move beyond the model itself.</p><p>Agentic AI systems are no longer just generating text or predictions. They decide what to do next, call tools, pass tasks to other agents, and sometimes act with very little friction. OWASP&#8217;s Top 10 reflects that reality. The risks it highlights are not about a single bad response. They are about systems behaving in ways that were not intended, not fully understood, or not properly controlled.</p><p>What stands out immediately is how many of the risks are governance problems at heart. Agent Goal Hijack, Tool Misuse, Identity and Privilege Abuse, Cascading Failures, and Rogue Agents all point to the same issue. Autonomy is being granted faster than it is being governed. 
In many cases, agents are trusted to act across systems without clear boundaries, strong oversight, or a reliable way to intervene when something goes wrong.</p><p>OWASP introduces the idea of Least Agency, which builds on the familiar principle of least privilege. The idea is simple but important. If an agent does not need autonomy, it should not have it. Extra agency increases the attack surface and the blast radius of failures. Alongside that, the document repeatedly emphasizes observability. If you cannot clearly see what an agent is doing, why it is doing it, and which tools it is invoking, you are already behind.</p><p>Another important shift is how familiar risks are reframed. Prompt injection becomes Agent Goal Hijack, where the attacker changes what the agent is trying to achieve rather than just its next response. Supply chain risk becomes Agentic Supply Chain Vulnerabilities, reflecting the reality that tools, agents, and configurations are often pulled in dynamically at runtime. Reliability issues show up as Cascading Failures, where one small mistake spreads quickly across agents and workflows.</p><p>What is clear throughout the document is that these risks cannot be solved by better models alone. Many of the recommended mitigations have nothing to do with training data or architecture. They are about approvals for high-impact actions, clear identity boundaries for agents, policy enforcement before execution, strong logging, and the ability to pause or roll back behavior when something drifts. These are organizational and system design choices, not model tweaks.</p><p>Overall, this release feels less like a security checklist and more like a warning. As agentic AI moves into production across finance, healthcare, infrastructure, and enterprise tooling, the biggest risks will come from systems that are too autonomous, too interconnected, and not well understood by the people responsible for them. The OWASP Top 10 makes one thing very clear. 
The future of AI governance is not about controlling outputs. It is about controlling behavior over time.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KYkZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbecaeef-0adc-4f3c-9605-31d07cec8e3f_1024x900.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!KYkZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbecaeef-0adc-4f3c-9605-31d07cec8e3f_1024x900.png 424w, https://substackcdn.com/image/fetch/$s_!KYkZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbecaeef-0adc-4f3c-9605-31d07cec8e3f_1024x900.png 848w, https://substackcdn.com/image/fetch/$s_!KYkZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbecaeef-0adc-4f3c-9605-31d07cec8e3f_1024x900.png 1272w, https://substackcdn.com/image/fetch/$s_!KYkZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbecaeef-0adc-4f3c-9605-31d07cec8e3f_1024x900.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!KYkZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbecaeef-0adc-4f3c-9605-31d07cec8e3f_1024x900.png" width="1024" height="900" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bbecaeef-0adc-4f3c-9605-31d07cec8e3f_1024x900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:900,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2121156,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/181785117?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad0deb95-9589-4474-af0d-4ae102ea3d83_1024x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!KYkZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbecaeef-0adc-4f3c-9605-31d07cec8e3f_1024x900.png 424w, https://substackcdn.com/image/fetch/$s_!KYkZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbecaeef-0adc-4f3c-9605-31d07cec8e3f_1024x900.png 848w, https://substackcdn.com/image/fetch/$s_!KYkZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbecaeef-0adc-4f3c-9605-31d07cec8e3f_1024x900.png 1272w, https://substackcdn.com/image/fetch/$s_!KYkZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbbecaeef-0adc-4f3c-9605-31d07cec8e3f_1024x900.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3><strong>Policy Radar: Regional and Global Governance Shifts</strong></h3><p><strong>1. <a href="https://www.kslaw.com/news-and-insights/eu-uk-ai-round-up-december-2025">Lifecycle Governance Gains Regulatory Momentum</a></strong></p><p><strong>Region: Global</strong><br>Recent regulatory updates emphasize continuous risk management across the AI lifecycle, including post-deployment monitoring, incident reporting, and periodic reassessment. As AI regulations move from adoption to implementation, regulators are reinforcing that compliance is not a one-time exercise but an ongoing operational responsibility.</p><div><hr></div><p><strong><a href="https://www.kslaw.com/news-and-insights/eu-uk-ai-round-up-december-2025">2. 
EU Signals Stronger Expectations on Organizational Controls</a></strong></p><p><strong>Region: European Union</strong><br>EU policymakers are increasingly framing AI obligations around organizational governance structures such as risk management systems, internal controls, documentation processes, and auditability. Recent guidance and implementation updates under the EU AI Act point toward compliance as a sustained organizational capability rather than a checklist at launch.</p><div><hr></div><p><strong><a href="https://www.theverge.com/news/842512/google-meta-openai-state-attorneys-general-ai-letter">3. U.S. Agencies Emphasize Human Oversight and Accountability</a></strong></p><p><strong>Region: United States</strong><br>U.S. regulators and enforcement bodies are sharpening their focus on human responsibility in AI-assisted decision-making. Recent actions highlight expectations that organizations deploying AI remain accountable for outcomes, particularly where systems affect consumer rights, access to services, or economic opportunity.</p><div><hr></div><p><strong><a href="https://www.isaca.org/resources/news-and-trends/industry-news/2025/isoiec-42001-and-eu-ai-act-a-practical-pairing-for-ai-governance">4. Standards Bodies Accelerate AI Management System Adoption</a></strong></p><p><strong>Region: Global</strong><br>Industry adoption of AI management system standards is accelerating as organizations seek scalable governance structures aligned with emerging regulation. Recent coverage highlights growing interest in ISO/IEC-based management systems as practical tools to operationalize AI governance across jurisdictions and business units.</p><div><hr></div><p><strong><a href="https://ajithp.com/2025/12/14/enterprise-ai-governance-framework/">5. 
Enterprises Face Pressure for End-to-End Traceability</a></strong></p><p><strong>Region: Global</strong><br>Organizations are facing increased scrutiny from regulators, partners, and customers to demonstrate traceability across data sourcing, model development, deployment decisions, and real-world outcomes. Recent regulatory guidance and enterprise analysis underscore traceability as a prerequisite for accountability and audit readiness.</p><div><hr></div><p><strong><a href="https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/">6. OWASP Releases Top 10 Risks for Agentic AI Systems</a></strong></p><p><strong>Region: Global</strong><br>The OWASP community released a new Top 10 risk list focused specifically on agentic AI systems, reflecting growing concern that autonomous and semi-autonomous AI introduces governance challenges beyond traditional model risks. The list highlights system-level issues such as excessive agency, unsafe tool use, goal misalignment, privilege escalation, and inadequate human oversight, reinforcing the shift from model-centric safety to end-to-end system governance.</p><h3><strong>Framework Focus</strong></h3><h4><strong>Agentic AI Is Moving Faster Than Governance</strong></h4><p>This week, there is no single framework or law to focus on, and that is the problem. There is still no regulation written specifically for agentic AI, even as these systems move quickly from experiments into real-world use.</p><p>Today, agentic systems are being interpreted through existing frameworks like the <strong>EU AI Act</strong>, the <strong>NIST AI Risk Management Framework</strong>, and <strong>ISO/IEC 42001</strong>. While these provide a starting point, they were not designed for systems that can plan, act, delegate, and adapt over time. Obligations around human oversight, accountability, and risk management become significantly harder to meet once autonomy is introduced.</p><p>The pace mismatch is becoming clear. 
Security communities like <strong>OWASP</strong> are already documenting concrete, high-impact agentic risks, while regulatory guidance remains largely indirect and reactive. Without faster, clearer rules for autonomy, permissions, and system-level control, organizations are left to make governance decisions on their own.</p><p>Agentic AI is not waiting for regulation to catch up. If governance continues to lag behind deployment, regulators will be forced to respond after failures rather than shape behavior before harm occurs.</p><h3><strong>Looking Ahead</strong></h3><ul><li><p><strong>18&#8211;20 December 2025 &#8211; Global Forum on AI Systems Governance (Virtual)</strong><br>A multi-stakeholder forum examining governance challenges in complex AI systems, including agentic workflows and multi-model deployments.</p></li><li><p><strong>January 2026 &#8211; AI Risk &amp; Evaluation Standards Updates</strong><br>Several standards bodies are expected to publish updated guidance on AI evaluation, documentation, and post-deployment monitoring practices.</p></li><li><p><strong>Q1 2026 &#8211; Increased Regulatory Attention on AI Operations</strong><br>Early signals suggest regulators will focus more heavily on operational governance, including incident response and internal accountability mechanisms.</p></li></ul><h3><strong>Closing Thoughts</strong></h3><p>Agentic AI is forcing a quiet but important shift in how we think about governance. The risks that matter most are no longer confined to biased outputs or incorrect answers. They emerge from autonomy, from delegation, and from systems acting across tools and workflows faster than humans can comfortably follow. Security communities are already mapping these risks in concrete terms, while regulators are still adapting frameworks that were never designed for agents that plan and act on their own.</p><p>Agentic AI will not wait for governance to catch up. 
The question is whether governance can move fast enough to shape behavior before autonomy becomes something we can no longer meaningfully supervise.</p><p><strong>If AI systems are increasingly allowed to act on our behalf, who should be responsible when their decisions cross a line we did not intend?</strong></p><p>Until next week,<br><em><strong>The AI Governance Today</strong></em></p>]]></content:encoded></item><item><title><![CDATA[AI Governance Today • Issue #05 • Dec 9 2025 AI and Antitrust in Everyday Platforms We Use]]></title><description><![CDATA[Welcome back to AI Governance Today. This week, the center of gravity in AI regulation shifted toward the structures that shape how AI reaches people rather than the models themselves.]]></description><link>https://aigovernancetoday.substack.com/p/ai-governance-today-issue-05-dec</link><guid isPermaLink="false">https://aigovernancetoday.substack.com/p/ai-governance-today-issue-05-dec</guid><dc:creator><![CDATA[Anmol Kumar]]></dc:creator><pubDate>Tue, 09 Dec 2025 14:03:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!TNDf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TNDf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TNDf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 424w, 
https://substackcdn.com/image/fetch/$s_!TNDf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TNDf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/312fb446-2881-454e-afec-552a7c128913_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:969310,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/178634446?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" 
srcset="https://substackcdn.com/image/fetch/$s_!TNDf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>Welcome back to AI Governance Today.</strong> This week, the center of gravity in AI regulation shifted toward the structures that shape how AI reaches people rather than the models themselves. With Europe opening antitrust investigations into Google and Meta, and the U.S. moving toward a single national rulebook, AI governance is expanding beyond safety debates toward questions of platform power, access, and interoperability. At the same time, new proposals in India and fresh analysis on global standards underline a broader trend: the future of AI will be shaped as much by the economic architecture around it as by the technical frameworks designed to control it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!x5ZG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f070cf-6d7d-4767-9310-1b44f9018013_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!x5ZG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f070cf-6d7d-4767-9310-1b44f9018013_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!x5ZG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f070cf-6d7d-4767-9310-1b44f9018013_1536x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!x5ZG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f070cf-6d7d-4767-9310-1b44f9018013_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!x5ZG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f070cf-6d7d-4767-9310-1b44f9018013_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!x5ZG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f070cf-6d7d-4767-9310-1b44f9018013_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/22f070cf-6d7d-4767-9310-1b44f9018013_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2449799,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/181109025?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f070cf-6d7d-4767-9310-1b44f9018013_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!x5ZG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f070cf-6d7d-4767-9310-1b44f9018013_1536x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!x5ZG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f070cf-6d7d-4767-9310-1b44f9018013_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!x5ZG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f070cf-6d7d-4767-9310-1b44f9018013_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!x5ZG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22f070cf-6d7d-4767-9310-1b44f9018013_1536x1024.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" 
x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3><strong>TL;DR For The Week</strong></h3><ul><li><p>The European Commission opens antitrust investigations into Google&#8217;s use of online content for AI services and Meta&#8217;s integration of a proprietary AI assistant inside WhatsApp, bringing competition law into AI governance.</p></li><li><p>The U.S. administration signals plans for a unified national AI regulatory framework that would preempt state-level AI laws, reflecting growing concern over fragmented compliance requirements.</p></li><li><p>The EU&#8217;s Digital Omnibus proposes delaying enforcement of high-risk AI obligations under the AI Act and linking compliance to the availability of harmonised technical standards.</p></li><li><p>India proposes requiring AI developers to pay royalties for locally produced copyrighted content used in model training, indicating a shift in intellectual-property norms for generative AI.</p></li><li><p>New industry analysis shows rising demand for interoperable AI safety standards, with greater alignment around frameworks such as the NIST AI RMF and ISO/IEC standards.</p></li><li><p>Regulators highlight structural risks from platform-level AI integration, signalling a shift from model-level safety concerns toward the economic dynamics that shape access, competition and innovation.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!E9Ji!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5291cfcc-9be9-4935-b515-e8e917d140ff_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!E9Ji!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5291cfcc-9be9-4935-b515-e8e917d140ff_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!E9Ji!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5291cfcc-9be9-4935-b515-e8e917d140ff_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!E9Ji!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5291cfcc-9be9-4935-b515-e8e917d140ff_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!E9Ji!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5291cfcc-9be9-4935-b515-e8e917d140ff_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!E9Ji!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5291cfcc-9be9-4935-b515-e8e917d140ff_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5291cfcc-9be9-4935-b515-e8e917d140ff_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:341448,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/181109025?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5291cfcc-9be9-4935-b515-e8e917d140ff_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!E9Ji!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5291cfcc-9be9-4935-b515-e8e917d140ff_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!E9Ji!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5291cfcc-9be9-4935-b515-e8e917d140ff_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!E9Ji!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5291cfcc-9be9-4935-b515-e8e917d140ff_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!E9Ji!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5291cfcc-9be9-4935-b515-e8e917d140ff_2000x600.png 1456w" sizes="100vw"></picture></div></a></figure></div><h3><strong>Spotlight</strong></h3><h4><strong>AI and Antitrust: When Platform Power Shapes the Future of Governance</strong></h4><p>Two developments in Europe this week point to an emerging dimension of AI governance that goes beyond familiar debates about model transparency or ethical safeguards. Regulators are now examining how dominant digital platforms might use AI to strengthen their market positions and shape the conditions of participation for everyone else. The European Commission&#8217;s separate investigations into Google and Meta bring competition law into the center of the AI governance conversation in a way that feels directly relevant to my own experience with these products.</p><p>I have been using Google products since 2007 and WhatsApp since 2013, which means I have lived through the evolution of these platforms from utility tools into complex digital ecosystems. When I search for information, share news links, or communicate with friends in different parts of the world, I am operating within environments that now have built-in AI capabilities. The shift from passive tools to active, AI-mediated experiences is easy to miss at a day-to-day level, but it is precisely this shift that regulators are beginning to evaluate. The question they are asking is not only whether AI systems are safe, but whether the way they are deployed could reduce competition or limit the visibility of alternative services.</p><p>The investigation into Google focuses on the use of online content to train and power AI features, such as AI-generated summaries in search results and new capabilities built on top of YouTube. 
Regulators want to understand whether the scale of Google&#8217;s access to public content gives it an advantage that others cannot realistically match and whether the current web architecture gives creators a meaningful way to control how their work is reused. This is fundamentally a governance question about value creation and value capture in a world where content is turned into training data at scale.</p><p>The inquiry into Meta centers on WhatsApp and the integration of a proprietary AI assistant, alongside potential restrictions on third-party AI services. WhatsApp has become a near-default communication layer in many regions, and the introduction of AI into that layer could have direct competitive implications. If access to the messaging channel is tied to a single provider&#8217;s AI assistant, smaller firms may struggle to reach users or experiment with new types of AI-based communication. The regulators are not questioning the usefulness of the technology, but the market structure that may form around it.</p><p>Both cases illustrate how AI governance is expanding from technical oversight to the economic foundations that support digital ecosystems. For those of us who use these platforms every day, this shift matters because it will influence what choices we have, how content creators are compensated, and which companies can participate in the next phase of AI development. The investigations will take time, but they raise an important question for the future of AI governance in consumer platforms: who gets to define the terms under which innovation reaches the people who use it.</p><h3><strong>Policy Radar: Regional and Global Governance Shifts</strong></h3><p><strong>1. Trump Announces Executive Order to Establish a Single U.S. 
AI Regulatory Framework</strong><br><strong>Region: </strong>United States<br>President Donald Trump signaled <a href="https://techcrunch.com/2025/12/08/one-rule-trump-says-hell-sign-an-executive-order-blocking-state-ai-laws-despite-bipartisan-pushback/">he will sign an executive order introducing a unified national AI rulebook</a>, overriding state-level AI laws and centralising regulatory authority at the federal level. The announcement reflects rising concerns over fragmented compliance requirements across multiple states, where transparency, deepfake controls, and consumer protection rules are already in force.</p><div><hr></div><p><strong>2. <a href="https://www.sidley.com/en/insights/newsupdates/2025/12/eu-digital-omnibus-the-european-commission-proposes-important-changes-to-the-eus-digital-rulebook">EU Digital Omnibus Proposes Delayed Enforcement for High-Risk AI Obligations</a></strong><br><strong>Region: </strong>European Union<br>As part of the &#8220;Digital Omnibus&#8221; reforms, the European Commission proposed extending key enforcement deadlines for the EU AI Act, linking obligations to the availability of harmonised technical standards. High-risk AI documentation and conformity assessments would shift from 2026 toward late 2027, acknowledging that regulatory infrastructure and guidance are still maturing.</p><div><hr></div><p><strong>3. EU Opens Antitrust Investigation Into Google Over AI Use of Online Content</strong><br><strong>Region: </strong>European Union<br>The European Commission launched <a href="https://www.reuters.com/sustainability/boards-policy-regulation/eu-launches-antitrust-probe-into-googles-use-online-content-ai-purposes-2025-12-09/">an antitrust investigation into Google&#8217;s use of publisher and creator content to train and power its AI services</a>, including AI-generated summaries and search outputs. Regulators are examining whether Google&#8217;s practices undermine competition or violate content-use rights. 
The probe highlights growing scrutiny on how large AI models access and reuse public data.</p><div><hr></div><p><strong>4. Meta Faces EU Competition Probe Over WhatsApp AI Integration Strategy</strong><br><strong>Region: </strong>European Union<br>Regulators opened a <a href="https://www.reuters.com/sustainability/boards-policy-regulation/eu-launch-antitrust-probe-into-meta-over-use-ai-whatsapp-ft-reports-2025-12-04">formal investigation into Meta&#8217;s plan to limit third-party AI chatbots on WhatsApp</a>, potentially giving exclusive access to Meta&#8217;s own AI assistant. The inquiry will assess whether the strategy could disadvantage competing AI services and restrict innovation within a dominant messaging platform.</p><div><hr></div><p><strong>5. India Proposes Royalties on Local Content Used for AI Model Training</strong><br><strong>Region: </strong>India<br>A government panel in <a href="https://timesofindia.indiatimes.com/technology/tech-news/government-panel-wants-google-and-openai-to-pay-content-creators-for-ai-training-use/articleshow/125868971.cms">India proposed requiring AI developers to pay royalties when locally produced copyrighted content is used to train large-scale models</a>. The proposal signals a shift toward revisiting intellectual-property norms for generative AI and introduces a potential remuneration model for creators whose work underpins commercial AI platforms.</p><div><hr></div><p><strong>6. New Global Analysis Highlights Rising Demand for Interoperable AI Standards</strong><br><strong>Region: </strong>Global<br>A multi-country industry analysis published this week shows <a href="https://unu.edu/macau/news/new-policy-report-interoperability-ai-safety-governance-ethics-regulations-and-standards">growing demand for harmonised AI safety and governance standards</a> to reduce compliance fragmentation. 
Companies operating across jurisdictions are converging around frameworks such as the NIST AI RMF and ISO/IEC standards, citing the need for consistent evaluation criteria, oversight structures and reporting expectations.</p><h3><strong>Framework Focus</strong></h3><h4><strong>Competition Law as a De-Facto AI Governance Framework</strong></h4><p>The investigations into Google and Meta this week are reminders that AI is entering a regulatory environment that did not start from zero. While attention has been fixed on new AI laws and safety frameworks, Europe is beginning to govern AI through tools that have existed for decades: competition law and platform regulation. It is a structural approach rather than a technical one, and it shows that the governance of AI can happen even before specialised AI regulations are fully implemented.</p><p>In the European context, this takes place through a combination of <strong><a href="https://eur-lex.europa.eu/eli/treaty/tfeu_2008/art_101/oj/eng">Articles 101 and 102 of the Treaty on the Functioning of the European Union (TFEU)</a></strong> and newer platform-specific rules under the <strong><a href="https://digital-markets-act.ec.europa.eu/legislation_en">Digital Markets Act (DMA)</a></strong>. The traditional antitrust provisions focus on anti-competitive agreements and abuse of dominance. The DMA complements this by setting behavioural expectations for &#8220;gatekeeper&#8221; platforms, including obligations on interoperability, access, data usage, and non-discrimination. What is notable is that these instruments are now being applied to behaviour driven by AI systems, rather than only to conventional pricing or market allocation practices.</p><p>This marks a different approach to AI governance. Instead of asking whether a model is safe or transparent, regulators are asking whether AI-mediated platform behaviour can distort markets or restrict innovation. 
When Meta deploys a proprietary AI assistant inside WhatsApp, the question is whether integration limits the ability of alternative AI providers to reach users. When Google trains models on vast amounts of online content, the question is whether its scale of access to data gives it a structural advantage that others cannot match. These are governance issues even if the underlying models are technically aligned with safety guidance.</p><p>The practical impact for organisations is that compliance is not only a future concern tied to AI-specific laws. It is already a present risk under existing frameworks. The DMA can require gatekeepers to provide equal access to core platform functionality, which means AI services may need to be interoperable rather than embedded in a closed ecosystem. TFEU provisions allow regulators to intervene if training data access or distribution channels create unfair competitive dynamics. In this view, AI governance is not waiting for the EU AI Act; it is emerging through the instruments that shape digital markets today.</p><p>This approach also demonstrates an important governance principle: many of the risks associated with AI are not purely technical. They are economic and structural. A highly accurate model can still raise governance concerns if it concentrates value, limits access, or changes the competitive landscape. By using competition law as a governance tool, Europe is reframing the conversation around AI from &#8220;what the model does&#8221; to &#8220;what the market becomes&#8221; once the model is deployed.</p><p>As the investigations continue, they may define how AI systems are allowed to integrate into dominant platforms, and whether distribution must remain open to a broader ecosystem of providers. 
It is a reminder that deep governance can happen through the instruments already on the books, not only those designed specifically for AI.</p><h3><strong>Looking Ahead</strong></h3><ul><li><p><strong><a href="https://www.iso.org/aisummit">17&#8211;19 December 2025 &#8211; International AI Standards Summit 2025 (Seoul, South Korea)</a></strong><br>A global convening of standards bodies and stakeholders, including representatives from ISO, IEC, ITU and national standards agencies, to advance international standards for AI design, governance, and interoperability.</p></li><li><p><strong><a href="https://kpmg.com/ch/en/insights/artificial-intelligence/iso-iec-42001.html">19 December 2025 &#8211; ISO/IEC 42001 Implementation Workshop (ISO)</a></strong><br>A practical workshop on how organisations can adopt the new ISO/IEC 42001 AI Management System standard, with implementation guidance and early lessons from industry adoption.</p></li><li><p><strong><a href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-5e2025.pdf">22 December 2025 &#8211; NIST AI Evaluation Working Group: End-of-Year Outlook</a></strong><br>A review of NIST&#8217;s evaluation roadmap going into 2026, including updates on testing benchmarks, documentation tooling and emerging priorities for AI model evaluation.</p></li></ul><h3><strong>Closing Thoughts</strong></h3><p>This week&#8217;s developments are a reminder that the governance of AI is no longer confined to model documentation or technical safeguards. It is increasingly shaped by deeper questions about access, distribution, and the invisible architecture of digital platforms. As regulators turn to competition law to guide the trajectory of AI, the focus shifts from what models can do to how the systems around them define who gets to participate in the next chapter of innovation. 
The challenge ahead is not only building safe technology, but ensuring that the economic structures supporting AI remain open, fair, and accountable to the public interest.</p><p>As AI becomes embedded in the tools and platforms we use every day, the question becomes harder to avoid: <strong>who should decide the terms under which innovation reaches the people who rely on it?</strong></p><p>Until next week,<br><em><strong>The AI Governance Today</strong></em></p>]]></content:encoded></item><item><title><![CDATA[AI Governance Today • Issue #04 • Dec 2 2025 Who Regulates AI? The U.S. Debate Takes Center Stage]]></title><description><![CDATA[Welcome back to AI Governance Today. This week, we focus on a different kind of shift taking place in AI governance, not in technical standards or timelines, but in the basic question of who holds regulatory authority. While Europe is adjusting its implementation pathways through the Digital Omnibus, the United States is entering a pivotal debate over whether AI oversight should be shaped nationally or preserved at the state level. Alongside this, we saw updates from Australia, a global warning from the UNDP about widening AI inequality, and new calls for harmonised safety standards across borders. 
Taken together, these developments show that AI governance is evolving not only through new rules, but through the structures and institutions that determine how those rules come into force.]]></description><link>https://aigovernancetoday.substack.com/p/ai-governance-today-issue-04-dec</link><guid isPermaLink="false">https://aigovernancetoday.substack.com/p/ai-governance-today-issue-04-dec</guid><dc:creator><![CDATA[Anmol Kumar]]></dc:creator><pubDate>Tue, 02 Dec 2025 15:03:36 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!TNDf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TNDf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TNDf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1272w, 
https://substackcdn.com/image/fetch/$s_!TNDf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TNDf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/312fb446-2881-454e-afec-552a7c128913_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:969310,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/178634446?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!TNDf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 848w, 
https://substackcdn.com/image/fetch/$s_!TNDf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Welcome back to <strong>AI Governance Today</strong>. 
This week, we focus on a different kind of shift taking place in AI governance, not in technical standards or timelines, but in the basic question of who holds regulatory authority. While Europe is adjusting its implementation pathways through the Digital Omnibus, the United States is entering a pivotal debate over whether AI oversight should be shaped nationally or preserved at the state level. Alongside this, we saw updates from Australia, a global warning from the UNDP about widening AI inequality, and new calls for harmonised safety standards across borders. Taken together, these developments show that AI governance is evolving not only through new rules, but through the structures and institutions that determine how those rules come into force.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Bt3E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29351484-aa93-4464-9e4b-50987bee664d_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Bt3E!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29351484-aa93-4464-9e4b-50987bee664d_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Bt3E!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29351484-aa93-4464-9e4b-50987bee664d_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Bt3E!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29351484-aa93-4464-9e4b-50987bee664d_1024x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Bt3E!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29351484-aa93-4464-9e4b-50987bee664d_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Bt3E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29351484-aa93-4464-9e4b-50987bee664d_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/29351484-aa93-4464-9e4b-50987bee664d_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1587304,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/180503536?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29351484-aa93-4464-9e4b-50987bee664d_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Bt3E!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29351484-aa93-4464-9e4b-50987bee664d_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Bt3E!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29351484-aa93-4464-9e4b-50987bee664d_1024x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!Bt3E!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29351484-aa93-4464-9e4b-50987bee664d_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Bt3E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29351484-aa93-4464-9e4b-50987bee664d_1024x1024.png 1456w" sizes="100vw"></picture></div></a></figure></div><h3><strong>TL;DR For The Week</strong></h3><ul><li><p>Thirty-five U.S. 
state attorneys general oppose federal proposals that would block state-level AI regulation, signalling a growing divide in American AI governance.</p></li><li><p>UNDP warns that rapid AI advancement may widen inequality between countries, particularly disadvantaging those lacking infrastructure, governance capacity, or regulatory readiness.</p></li><li><p>EU faces criticism over proposed changes in the Digital Omnibus that ease data-use restrictions for AI model training under legitimate interest.</p></li><li><p>The U.S. government launches &#8220;Mission Genesis,&#8221; a national AI research platform built on federal scientific datasets with new oversight requirements.</p></li><li><p>Australia publishes updated guidance for high-impact AI in public services, emphasising explainability and human oversight.</p></li><li><p>A new cross-border analysis shows rising demand for harmonised AI safety standards, with greater alignment around NIST and ISO frameworks.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tFZg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d68596-fe07-4979-8bf1-5990c11031aa_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tFZg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d68596-fe07-4979-8bf1-5990c11031aa_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!tFZg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d68596-fe07-4979-8bf1-5990c11031aa_2000x600.png 848w, 
https://substackcdn.com/image/fetch/$s_!tFZg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d68596-fe07-4979-8bf1-5990c11031aa_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!tFZg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d68596-fe07-4979-8bf1-5990c11031aa_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tFZg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d68596-fe07-4979-8bf1-5990c11031aa_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/42d68596-fe07-4979-8bf1-5990c11031aa_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:379313,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/180503536?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d68596-fe07-4979-8bf1-5990c11031aa_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tFZg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d68596-fe07-4979-8bf1-5990c11031aa_2000x600.png 424w, 
https://substackcdn.com/image/fetch/$s_!tFZg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d68596-fe07-4979-8bf1-5990c11031aa_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!tFZg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d68596-fe07-4979-8bf1-5990c11031aa_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!tFZg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F42d68596-fe07-4979-8bf1-5990c11031aa_2000x600.png 1456w" sizes="100vw"></picture></div></a></figure></div><h3><strong>Spotlight</strong></h3><h4><strong>The New Regulatory Fracture: How AI Governance Is Splitting Between U.S. States and Washington</strong></h4><p>One of the most significant developments in AI governance this week emerged from the United States, where <a href="https://www.reuters.com/legal/litigation/dozens-state-attorneys-general-urge-us-congress-not-block-ai-laws-2025-11-25/">thirty-five state attorneys general urged Congress not to restrict state-level authority over AI regulation</a>. Their letter was a response to proposed language in the National Defense Authorization Act that would preempt state AI laws and centralise regulatory power at the federal level. While procedural in form, the debate reveals a deeper shift in how AI oversight may take shape across the country.</p><p>This moment marks the early stages of a structural divide. While Europe is focused on adjusting timelines and technical standards within a single, comprehensive regulatory framework, the United States is still determining the basic distribution of regulatory power. States have moved quickly over the past two years, introducing laws on deepfakes, automated hiring, transparency duties, consumer protection and AI-enabled deception. From their perspective, local authority allows regulators to respond more effectively to the specific risks faced by their residents, especially in areas like employment, housing, public safety and children&#8217;s rights.</p><p>Federal policymakers, on the other hand, are increasingly concerned about fragmentation. They argue that a unified national framework is important for clarity, innovation and operational consistency, particularly for companies deploying AI systems across multiple states. 
A patchwork of divergent state laws, each with different definitions, obligations and enforcement mechanisms, could complicate compliance and slow the adoption of AI tools in critical sectors.</p><p>This tension mirrors earlier phases of U.S. technology regulation, such as the period before the emergence of comprehensive privacy laws like the CCPA or global frameworks like GDPR. The question is no longer simply how to govern AI systems but how governance authority should be shared between federal and state institutions.</p><p>For organisations operating across the United States, this is more than an abstract policy debate. It influences their risk assessments, documentation strategies, and operational planning. Companies may need to prepare for multi-jurisdictional compliance requirements, uneven enforcement timelines and potential shifts in federal legislation. The landscape may also evolve differently depending on the domain, with some areas trending toward national consistency and others, like consumer transparency or deepfake controls, remaining firmly in state hands.</p><p>As debates continue in Washington and in state capitals, the United States is entering a formative period in its AI governance trajectory. The outcome will shape not only regulatory obligations but also the degree of predictability and coherence the AI ecosystem can rely on in the years ahead.</p><h3><strong>Policy Radar: Regional and Global Governance Shifts</strong></h3><p><strong>1. <a href="https://www.aljazeera.com/news/2025/12/2/ai-threatens-to-widen-inequality-among-states-un">UNDP Warns That Rapid AI Adoption Could Widen Global Inequality</a></strong></p><p><strong>Region: </strong>Global<br>The United Nations Development Programme issued a warning that uneven access to AI infrastructure and governance capacity may deepen inequality between states. 
The report highlights risks for lower-income nations lacking investment, regulatory readiness, or computing resources.</p><div><hr></div><p><strong>2. <a href="https://www.reuters.com/legal/litigation/dozens-state-attorneys-general-urge-us-congress-not-block-ai-laws-2025-11-25">U.S. State Attorneys General Oppose Federal Plan to Block State AI Laws</a></strong></p><p><strong>Region: </strong>United States<br>A coalition of 35 state attorneys general asked Congress not to pass federal provisions that would preempt state-level AI regulation. The letter argues that states must retain authority to enforce rules on transparency, deepfakes, and AI-related harms.</p><div><hr></div><p><strong>3. <a href="https://www.theguardian.com/world/2025/nov/19/european-commission-accused-of-massive-rollback-of-digital-protections">EU Faces Backlash Over Proposed Loosening of AI Data-Use Safeguards</a></strong></p><p><strong>Region: </strong>European Union<br>Following the European Commission&#8217;s &#8220;Digital Omnibus&#8221; proposals, civil-society groups criticised amendments that expand legitimate-interest allowances for personal data used in AI training and reduce certain transparency obligations. Critics warn the changes dilute privacy protections embedded in the GDPR.</p><div><hr></div><p><strong>4. <a href="https://www.reuters.com/business/trump-aims-boost-ai-innovation-build-platform-harness-government-data-2025-11-24/">U.S. Government Launches &#8220;Mission Genesis,&#8221; a National AI Research Platform</a></strong></p><p><strong>Region: </strong>United States<br>The U.S. announced &#8220;Mission Genesis,&#8221; an AI platform integrating federal scientific datasets to support large-scale research in health, climate, energy, and infrastructure. The initiative introduces new governance and oversight requirements for federal AI research environments.</p><div><hr></div><p><strong>5. 
<a href="https://www.digital.gov.au/policy/ai/australian-public-service-ai-plan-2025">Australia Updates Guidance on High-Impact AI in Public Services</a></strong></p><p><strong>Region: </strong>Australia<br>Australia&#8217;s Digital Transformation Agency released new internal guidance on classifying and managing high-impact AI systems used in government services. The update stresses explainability, human oversight, and ongoing monitoring for AI used in public decision-making.</p><div><hr></div><p><strong>6. <a href="https://www.riskinfo.ai/post/ai-insights-key-global-developments-in-november-2025">New Cross-Border Analysis Shows Demand for Harmonised AI Safety Standards</a></strong></p><p><strong>Region: </strong>Global<br>A new multi-country industry analysis published this week shows growing demand for harmonised AI safety and governance standards, particularly among companies operating across jurisdictions. The report notes increasing alignment around frameworks such as the NIST AI RMF and ISO/IEC standards.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SK2c!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ade95de-20b8-46ef-9ea1-1036ac2e0af5_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SK2c!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ade95de-20b8-46ef-9ea1-1036ac2e0af5_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!SK2c!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ade95de-20b8-46ef-9ea1-1036ac2e0af5_1024x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!SK2c!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ade95de-20b8-46ef-9ea1-1036ac2e0af5_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!SK2c!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ade95de-20b8-46ef-9ea1-1036ac2e0af5_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SK2c!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ade95de-20b8-46ef-9ea1-1036ac2e0af5_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4ade95de-20b8-46ef-9ea1-1036ac2e0af5_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1780466,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/180503536?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ade95de-20b8-46ef-9ea1-1036ac2e0af5_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!SK2c!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ade95de-20b8-46ef-9ea1-1036ac2e0af5_1024x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!SK2c!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ade95de-20b8-46ef-9ea1-1036ac2e0af5_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!SK2c!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ade95de-20b8-46ef-9ea1-1036ac2e0af5_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!SK2c!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ade95de-20b8-46ef-9ea1-1036ac2e0af5_1024x1024.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3><strong>Framework Focus</strong></h3><h4><strong>The EU Digital Omnibus on AI: What It Is and Why It Matters Now</strong></h4><p>We have been tracking the <a href="https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-ai-regulation-proposal">EU&#8217;s Digital Omnibus on AI </a>across the past two editions, so it is worth taking a clear look at what this proposal actually is and why it represents a significant moment in the rollout of the EU AI Act. Released on 19 November, the Digital Omnibus is a package of targeted amendments designed to update, refine and in some cases rebalance the regulatory architecture surrounding AI in Europe. Although positioned as a technical adjustment, the Omnibus signals a meaningful shift in how the EU is approaching both timelines and expectations for high-risk AI systems.</p><p>At its core, the Omnibus acknowledges that the original AI Act deadlines were too ambitious for regulators and industry to implement effectively. <a href="https://www.cooley.com/news/insight/2025/2025-11-24-eu-ai-act-proposed-digital-omnibus-on-ai-will-impact-businesses-ai-compliance-roadmaps">Several high-risk obligations, originally scheduled for enforcement in August 2026, are proposed to be pushed to December 2027.</a> This includes deadlines related to conformity assessments, technical documentation and the availability of harmonised standards. The extension is framed as a pragmatic step: meaningful implementation requires standards, guidance and infrastructure that are still maturing.</p><p>The Omnibus also introduces notable changes to data governance. It broadens the conditions under which personal data may be used for model training under the GDPR&#8217;s legitimate-interest basis, reducing strict consent requirements. This marks a significant recalibration of Europe&#8217;s long-standing privacy posture. 
Supporters see it as necessary for competitiveness and innovation. Critics argue it may weaken fundamental rights and reduce transparency around how data is used in AI systems.</p><p>Another element of the proposal is the streamlining of obligations for low-risk and general-purpose AI systems. Registration pathways and administrative requirements are simplified, reflecting a more risk-based and proportionate approach to compliance. This aligns with a broader recognition that not all AI systems carry the same societal or operational risk and should not be governed in the same way.</p><p>The Digital Omnibus represents an evolution of Europe&#8217;s approach to AI governance. Instead of a static regulatory design, the EU is shifting toward adaptive implementation, shaped by technical realities and industry capacity. For organisations preparing for AI Act compliance, the Omnibus is an important signal: governance will be phased, standards-driven and responsive to the practical challenges of deploying high-risk AI systems at scale. 
It is less a retreat and more a recalibration, designed to ensure that enforcement lands in a way that is both credible and workable.</p><h3><strong>Looking Ahead</strong></h3><ul><li><p><strong><a href="https://www.isaca.org/training-and-events/online-training/virtual-summits/ai-governance-strategies">3 December 2025 &#8211; AI Governance Strategies Virtual Summit (ISACA)</a></strong><br>A global online summit discussing enterprise-grade AI governance architectures, risk oversight, audit readiness, and governance for organisations deploying AI systems.</p></li><li><p><strong><a href="https://www.onetrust.com/resources/eu-digital-omnibus-explained-gdpr-ai-act-and-eprivacy-changes-webinar/">8 December 2025 &#8211; EU Digital Omnibus Explained: GDPR, AI Act &amp; ePrivacy Changes (OneTrust Webinar)</a></strong><br>An expert panel unpacking the recent EU Digital Omnibus proposals, exploring how the changes to the AI Act, GDPR, and ePrivacy may affect compliance, data use, and AI governance across Europe.</p></li><li><p><strong><a href="https://luma.com/3bahkg6y">8 December 2025 &#8211; All Tech Is Human End-of-Year Responsible Tech Mixer at Kingston Hall (149 2nd Ave, New York, NY 10003)</a></strong><br>Join the <a href="https://alltechishuman.org/">All Tech Is Human community</a> for an end-of-year Responsible Tech Mixer at Kingston Hall in NYC. It&#8217;s an informal gathering to reflect on the past year, connect with practitioners across the Responsible Tech ecosystem, and celebrate the work ahead.</p></li></ul><h3><strong>Closing Thoughts</strong></h3><p>This week&#8217;s signals remind us that AI governance is no longer moving along a single path. Different regions are converging on similar principles but diverging on authority, timing and implementation. 
Europe is refining its regulatory machinery, the United States is negotiating the boundaries of state and federal power, and global institutions are warning that governance capacity itself is becoming a new axis of inequality.</p><p>For organisations building and deploying AI, this is not just background noise. It is the environment in which systems will be evaluated, questioned and held accountable. The rules may change, but the expectation of evidence, transparency and responsibility is only growing stronger.</p><p>As AI becomes more embedded in decisions that matter, the real challenge will not be keeping up with individual regulations but understanding how these fragmented approaches interact, and what that means for trust.</p><p><strong>In a world where AI governance is being shaped by competing jurisdictions, emerging standards and shifting expectations, what does it take for an organisation to build trust that endures across all of them?</strong></p><p>Until next week,<br><em><strong>AI Governance Today</strong></em></p>]]></content:encoded></item><item><title><![CDATA[AI Governance Today • Issue #03 • Nov 25 2025 Frameworks Are Becoming the New Infrastructure]]></title><description><![CDATA[Welcome back to AI Governance Today. Before we go for a much-needed break with family and friends later this week for Thanksgiving, we look at a shift that is quickly becoming central to responsible AI: the move from talking about principles to actually operationalising frameworks. Around the world, organisations are realising that fairness, transparency, and accountability only matter when they are translated into repeatable processes and real oversight. The City of San Jos&#233;&#8217;s adoption of the NIST AI Risk Management Framework captures this change in momentum. It reflects a broader recognition that AI governance is no longer about what we believe, but about how we implement those beliefs in practice. 
As AI systems become more embedded in public services and critical decisions, framework adoption is emerging as the clearest signal of maturity and the most reliable path to trustworthy deployment.]]></description><link>https://aigovernancetoday.substack.com/p/ai-governance-today-issue-03-nov</link><guid isPermaLink="false">https://aigovernancetoday.substack.com/p/ai-governance-today-issue-03-nov</guid><dc:creator><![CDATA[Anmol Kumar]]></dc:creator><pubDate>Tue, 25 Nov 2025 13:52:58 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b956b51e-28fd-4596-995c-d943b2914972_2000x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RGRJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F403209a2-0699-43e2-b77a-02a041d12f08_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RGRJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F403209a2-0699-43e2-b77a-02a041d12f08_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!RGRJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F403209a2-0699-43e2-b77a-02a041d12f08_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!RGRJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F403209a2-0699-43e2-b77a-02a041d12f08_2000x600.png 1272w, 
https://substackcdn.com/image/fetch/$s_!RGRJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F403209a2-0699-43e2-b77a-02a041d12f08_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RGRJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F403209a2-0699-43e2-b77a-02a041d12f08_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/403209a2-0699-43e2-b77a-02a041d12f08_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:377950,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/179276392?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F403209a2-0699-43e2-b77a-02a041d12f08_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RGRJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F403209a2-0699-43e2-b77a-02a041d12f08_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!RGRJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F403209a2-0699-43e2-b77a-02a041d12f08_2000x600.png 848w, 
https://substackcdn.com/image/fetch/$s_!RGRJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F403209a2-0699-43e2-b77a-02a041d12f08_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!RGRJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F403209a2-0699-43e2-b77a-02a041d12f08_2000x600.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Welcome back to <strong>AI Governance Today</strong>. 
Before we go for a much-needed break with family and friends later this week for Thanksgiving, we look at a shift that is quickly becoming central to responsible AI: the move from talking about principles to actually operationalising frameworks. Around the world, organisations are realising that fairness, transparency, and accountability only matter when they are translated into repeatable processes and real oversight. The City of San Jos&#233;&#8217;s adoption of the NIST AI Risk Management Framework captures this change in momentum. It reflects a broader recognition that AI governance is no longer about what we believe, but about how we implement those beliefs in practice. As AI systems become more embedded in public services and critical decisions, framework adoption is emerging as the clearest signal of maturity and the most reliable path to trustworthy deployment.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RWMC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed4d5a18-7019-4eb7-9b2a-7cc982b3b986_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RWMC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed4d5a18-7019-4eb7-9b2a-7cc982b3b986_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!RWMC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed4d5a18-7019-4eb7-9b2a-7cc982b3b986_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!RWMC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed4d5a18-7019-4eb7-9b2a-7cc982b3b986_1024x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!RWMC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed4d5a18-7019-4eb7-9b2a-7cc982b3b986_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RWMC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed4d5a18-7019-4eb7-9b2a-7cc982b3b986_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ed4d5a18-7019-4eb7-9b2a-7cc982b3b986_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1536439,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/179276392?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed4d5a18-7019-4eb7-9b2a-7cc982b3b986_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RWMC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed4d5a18-7019-4eb7-9b2a-7cc982b3b986_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!RWMC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed4d5a18-7019-4eb7-9b2a-7cc982b3b986_1024x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!RWMC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed4d5a18-7019-4eb7-9b2a-7cc982b3b986_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!RWMC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fed4d5a18-7019-4eb7-9b2a-7cc982b3b986_1024x1024.png 1456w" sizes="100vw"></picture></div></a></figure></div><h3><strong>TL;DR For The Week</strong></h3><ul><li><p>San Jos&#233; becomes the first U.S. 
city to formally adopt the NIST AI Risk Management Framework, signalling a shift from principles to operational governance.</p></li><li><p>European Commission proposes delaying high-risk EU AI Act obligations to December 2027 as part of a broader regulatory simplification effort.</p></li><li><p>EU introduces a digital regulation simplification package, including updates to GDPR interpretations for AI training and streamlined consent processes.</p></li><li><p>Tennessee adopts a statewide AI governance framework requiring transparency, risk evaluation, and documented oversight across all state agency AI systems.</p></li><li><p>West and Central African governments emphasise structured AI governance, data management, and digital readiness at the World Bank&#8217;s regional summit.</p></li><li><p>Global analysis shows accelerating adoption of formal AI governance frameworks such as NIST AI RMF, ISO/IEC standards, and sector-specific models.</p></li><li><p>Japan updates its national AI governance strategy, advancing voluntary guidelines and targeted rules for high-impact sectors.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ePMO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ecf30d9-14f3-4ad0-b604-01a7a48e025e_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ePMO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ecf30d9-14f3-4ad0-b604-01a7a48e025e_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!ePMO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ecf30d9-14f3-4ad0-b604-01a7a48e025e_2000x600.png 848w, 
https://substackcdn.com/image/fetch/$s_!ePMO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ecf30d9-14f3-4ad0-b604-01a7a48e025e_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!ePMO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ecf30d9-14f3-4ad0-b604-01a7a48e025e_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ePMO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ecf30d9-14f3-4ad0-b604-01a7a48e025e_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4ecf30d9-14f3-4ad0-b604-01a7a48e025e_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:349510,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/179276392?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ecf30d9-14f3-4ad0-b604-01a7a48e025e_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ePMO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ecf30d9-14f3-4ad0-b604-01a7a48e025e_2000x600.png 424w, 
https://substackcdn.com/image/fetch/$s_!ePMO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ecf30d9-14f3-4ad0-b604-01a7a48e025e_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!ePMO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ecf30d9-14f3-4ad0-b604-01a7a48e025e_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!ePMO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ecf30d9-14f3-4ad0-b604-01a7a48e025e_2000x600.png 1456w" sizes="100vw"></picture></div></a></figure></div><h3><strong>Spotlight</strong></h3><h4><strong>Frameworks Are Becoming the New Infrastructure of AI Governance</strong></h4><p>Something important took place this month that did not make global headlines, yet it represents a quiet turning point in the evolution of public sector AI governance. <a href="https://blog.rsisecurity.com/san-jose-nist-ai-risk-management-framework/">The City of San Jos&#233; formally adopted the NIST AI Risk Management Framework</a> as the foundation for how it evaluates, deploys, and oversees artificial intelligence across its municipal services. On the surface, this sounds like a procedural decision. In reality, it signals a shift that many governments, companies, and institutions will soon have to make.</p><p>For years the AI governance landscape has been filled with principles. Every organisation has a version of the familiar triad of fairness, accountability, and transparency. These principles are valuable, but they do not answer the operational questions that determine whether a system is actually safe or trustworthy. Principles cannot tell you how to run a risk assessment, how to judge whether a dataset is appropriate for a given use case, or how to monitor a model after deployment. They offer direction, but they do not offer procedure.</p><p>This is why San Jos&#233;&#8217;s decision matters. It marks the point where an organisation shifts from asking what its values are to asking how those values will be implemented in practice. The NIST AI RMF gives city teams a common language and a shared method for doing that work. It provides a lifecycle that begins with defining context, continues through data and model evaluation, and carries into deployment, monitoring, and incident response. It replaces vague expectations with concrete actions. 
Most importantly, it allows different departments to govern AI in a consistent and predictable way instead of improvising their own methods.</p><p>When a city commits to a framework, the effects ripple outward. Vendors who want to work with San Jos&#233; are now expected to provide evidence about how their systems were developed and tested. Procurement teams have a structure to evaluate AI proposals instead of relying on marketing language. Legal teams understand their role in risk classification. IT and data teams can map controls to actual engineering tasks. Citizens can understand the roadmap the city follows to protect their rights and wellbeing. This is what it looks like when governance becomes a practice rather than a set of ideals.</p><p>There is also something deeper happening underneath the surface. Framework adoption closes a gap that has held back AI governance for years. Many organisations know their intentions but lack the tools to operationalise them. A framework provides the scaffolding on which good governance can be built. It turns questions like &#8220;Is this system fair?&#8221; into disciplined activities such as documenting dataset lineage, testing for disparate impact, or validating model performance under edge cases. It ensures that every new AI system enters the city&#8217;s ecosystem with a documented, repeatable, and reviewable record.</p><p>San Jos&#233; is not alone. It is simply early. Many more governments, hospitals, research institutions, and enterprises will follow this path. As AI becomes more embedded in critical services, municipalities and companies alike will need an operating system for governance. Frameworks like the AI RMF provide that foundation. They bring discipline to decision making, make risks visible, and establish a reliable pattern of oversight.</p><p>The lesson is simple and increasingly unavoidable. AI governance will not advance through principles alone. 
It will advance through frameworks that genuinely translate those principles into actions, responsibilities, and evidence. The organisations that adopt these frameworks early will be the ones best prepared for upcoming regulation, public expectations, and the realities of managing complex AI systems in real environments.</p><p></p><h3><strong>Policy Radar: Regional and Global Governance Shifts</strong></h3><p><strong>1. European Commission Proposes Delay on High-Risk AI Rules</strong></p><p><strong>Region: European Union</strong></p><ul><li><p><a href="https://www.reuters.com/sustainability/boards-policy-regulation/eu-delay-high-risk-ai-rules-until-2027-after-big-tech-pushback-2025-11-19/">The European Commission announced that key high-risk obligations under the EU AI Act will be delayed from August 2026 to December 2027.</a></p></li><li><p>The postponement is part of a broader regulatory simplification effort aimed at easing compliance for both industry and public-sector adopters.</p></li><li><p>The delay affects conformity assessments, technical documentation deadlines and oversight requirements for high-risk systems.</p></li></ul><div><hr></div><p><strong>2. EU Introduces Digital Regulation Simplification Package</strong></p><p><strong>Region: European Union</strong></p><ul><li><p>The Commission unveiled a <a href="https://www.lemonde.fr/en/economy/article/2025/11/19/european-commission-launches-digital-regulation-simplification_6747624_19.html">new digital regulation simplification package</a>.</p></li><li><p>Proposed adjustments include updated GDPR interpretations related to data use for model training and more streamlined consent processes.</p></li><li><p>The package aims to harmonise rules across data, privacy and AI governance.</p></li></ul><div><hr></div><p><strong>3. 
Tennessee Adopts a Statewide AI Governance Framework</strong></p><p><strong>Region: United States</strong></p><ul><li><p><a href="https://www.tn.gov/finance/news/2025/11/24/tennessee-sets-bold-course-for-ai-leadership.html">Tennessee approved its 2025 Action Plan for AI</a>, creating a formal framework for AI use in state government.</p></li><li><p>The plan outlines expectations for transparency, risk evaluation and protections for citizen rights.</p></li><li><p>State agencies must document system impacts, assess risks and maintain governance records for all AI deployments.</p></li></ul><div><hr></div><p><strong>4. World Bank Summit Highlights AI Governance Priorities in West and Central Africa</strong></p><p><strong>Region: Africa</strong></p><ul><li><p><a href="https://www.worldbank.org/en/news/statement/2025/11/18/regional-summit-on-digital-transformation-in-western-and-central-africa-cotonou-declaration">A regional digital transformation summit in Cotonou </a>focused on the role of AI frameworks in government modernisation.</p></li><li><p>Delegates emphasised data governance, infrastructure development and digital skills training as priorities.</p></li><li><p>The Cotonou Declaration encourages governments to embed structured AI governance into national digital strategies.</p></li></ul><div><hr></div><p><strong>5. 
Global Industry Overview Shows Rising Adoption of AI Governance Frameworks</strong></p><p><strong>Region: Global</strong></p><ul><li><p>A November global analysis highlighted the growing adoption of formal AI governance frameworks across enterprises and public institutions.</p></li><li><p><a href="https://www.riskinfo.ai/post/ai-insights-key-global-developments-in-november-2025">Organisations are increasingly aligning with the NIST AI RMF, ISO/IEC standards and sector-specific models</a>.</p></li><li><p>Framework alignment is becoming a standard expectation in procurement, risk management and compliance workflows.</p></li></ul><div><hr></div><p><strong>6. Japan Updates Its National Approach to AI Governance</strong></p><p><strong>Region: Japan</strong></p><ul><li><p><a href="https://iapp.org/resources/article/global-ai-governance-japan">Japan released updated insights on its evolving national AI governance strategy</a>.</p></li><li><p>The country continues to combine voluntary guidelines with targeted regulations in high-impact sectors such as healthcare and public administration.</p></li><li><p>Recent updates focus on promoting transparency, reliability and responsible deployment across industry and government services.</p></li></ul><h3><strong>Framework Focus</strong></h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MDM2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30ae9d44-48b2-4763-8ca5-dc2e780aae85_1480x806.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MDM2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30ae9d44-48b2-4763-8ca5-dc2e780aae85_1480x806.webp 424w, 
https://substackcdn.com/image/fetch/$s_!MDM2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30ae9d44-48b2-4763-8ca5-dc2e780aae85_1480x806.webp 848w, https://substackcdn.com/image/fetch/$s_!MDM2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30ae9d44-48b2-4763-8ca5-dc2e780aae85_1480x806.webp 1272w, https://substackcdn.com/image/fetch/$s_!MDM2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30ae9d44-48b2-4763-8ca5-dc2e780aae85_1480x806.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MDM2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30ae9d44-48b2-4763-8ca5-dc2e780aae85_1480x806.webp" width="1456" height="793" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/30ae9d44-48b2-4763-8ca5-dc2e780aae85_1480x806.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:793,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:26572,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/179276392?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30ae9d44-48b2-4763-8ca5-dc2e780aae85_1480x806.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!MDM2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30ae9d44-48b2-4763-8ca5-dc2e780aae85_1480x806.webp 424w, https://substackcdn.com/image/fetch/$s_!MDM2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30ae9d44-48b2-4763-8ca5-dc2e780aae85_1480x806.webp 848w, https://substackcdn.com/image/fetch/$s_!MDM2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30ae9d44-48b2-4763-8ca5-dc2e780aae85_1480x806.webp 1272w, https://substackcdn.com/image/fetch/$s_!MDM2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30ae9d44-48b2-4763-8ca5-dc2e780aae85_1480x806.webp 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><strong>Operationalising AI Governance: The NIST AI RMF in Practice</strong></p><p>As organisations expand their use of AI across public services, healthcare, finance and critical infrastructure, the real challenge is no longer identifying risks but managing them through systematic and repeatable processes. This is why the NIST AI Risk Management Framework is gaining so much traction this year. Cities like San Jos&#233; are now using it as the foundational approach for evaluating and overseeing AI systems across departments.</p><p>Across sectors, teams are moving away from high-level principles and towards lifecycle governance models that can actually be implemented. The AI RMF provides that structure. It gives organisations a common vocabulary and a practical way to translate values such as fairness, accountability and transparency into daily operational work.</p><p>The NIST AI RMF Lifecycle:</p><p><strong>1. Govern</strong></p><p>This is the foundation. Organisations define roles, responsibilities, documentation pathways and oversight requirements before any model is designed or purchased. It includes internal policies, vendor expectations and processes for recording assumptions and risks.</p><p><strong>2. Map</strong></p><p>Teams describe the context in which an AI system will be used. They document the system&#8217;s purpose, map data sources, identify affected groups and anticipate potential impacts. Mapping helps prevent a mismatch between intended use and real-world consequences.</p><p><strong>3.
Measure</strong></p><p>Models are tested and validated through performance evaluation, red-teaming, robustness checks and analysis of failure modes. The focus is on producing evidence that shows how the system behaves, where it works well and where it struggles.</p><p><strong>4. Manage</strong></p><p>After deployment, systems are monitored continuously. This includes incident tracking, escalation procedures, updates based on real-world feedback and long-term performance assessment. Manage is where governance becomes a living process rather than a single approval point.</p><p><strong>Why this matters</strong></p><p>Organisations adopting the AI RMF are discovering a clearer path from ethical intention to practical implementation. Public-sector agencies are using it to bring structure to procurement, decision-making and oversight. As AI becomes more deeply embedded in essential services, frameworks like the AI RMF are becoming essential tools for trustworthy deployment and long-term accountability.</p><h3><strong>Looking Ahead</strong></h3><ul><li><p><strong><a href="https://www.onetrust.com/resources/accelerate-innovation-with-ai-governance-a-live-demo-webinar/">2 December 2025 &#8211; Defining an AI Agent Policy: Governing the Next Wave of Intelligent Systems (OneTrust)</a></strong><br>A session focused on how organisations can build AI agent policies, establish governance controls, and prepare for the operational implications of agentic workflows.</p></li><li><p><strong><a href="https://www.isaca.org/training-and-events/online-training/virtual-summits/ai-governance-strategies">3 December 2025 &#8211; AI Governance Strategies Virtual Summit (ISACA)</a></strong><br>A global virtual summit exploring enterprise AI governance models, risk controls, audit readiness, and board-level expectations for responsible AI deployment.</p></li></ul><h3><strong>Closing Thoughts</strong></h3><p>AI governance is entering a phase where maturity is measured not by
published principles but by the discipline of operational frameworks and the consistency of their application. As more governments and institutions adopt structured models like the NIST AI RMF, oversight becomes less about isolated approval gates and more about continuous, evidence-driven stewardship across the full lifecycle of an AI system. This evolution mirrors the technology itself: dynamic, adaptive, and deeply interconnected.</p><p>But with that progress comes a more strategic challenge. Frameworks provide structure, yet organisations must still confront the fundamental question: <em><strong>Does our level of oversight truly match the complexity and risk of the systems we are deploying?</strong></em></p><p>Until next week,<br><em><strong>The AI Governance Today</strong></em></p>]]></content:encoded></item><item><title><![CDATA[AI Governance Today • Issue #02 • Nov 18 2025 Human Oversight Is Not Optional]]></title><description><![CDATA[Welcome back to AI Governance Today. This week, we focus on a clear global shift in AI oversight. Regulators are no longer debating whether humans should remain part of critical AI decisions; that question was settled long ago. 
The conversation has now moved to a more urgent frontier:]]></description><link>https://aigovernancetoday.substack.com/p/ai-governance-today-issue-02-nov</link><guid isPermaLink="false">https://aigovernancetoday.substack.com/p/ai-governance-today-issue-02-nov</guid><dc:creator><![CDATA[Anmol Kumar]]></dc:creator><pubDate>Tue, 18 Nov 2025 16:28:48 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!TNDf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TNDf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TNDf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!TNDf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/312fb446-2881-454e-afec-552a7c128913_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:969310,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/178634446?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TNDf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!TNDf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F312fb446-2881-454e-afec-552a7c128913_2000x600.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Welcome back to <strong>AI Governance Today</strong>. This week, we focus on a clear global shift in AI oversight. Regulators are no longer debating whether humans should remain part of critical AI decisions; that question was settled long ago. The conversation has now moved to a more urgent frontier: <strong>how deeply human oversight must be embedded into complex, autonomous, multi-agent systems</strong>, and what effective oversight looks like as these systems scale across sectors and jurisdictions.
The world is slowly aligning around a principle aviation learned decades ago: <strong>the more capable the system becomes, the more essential the human becomes</strong>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cxDI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec10384-0517-409a-a3c1-42a0697edf43_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cxDI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec10384-0517-409a-a3c1-42a0697edf43_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!cxDI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec10384-0517-409a-a3c1-42a0697edf43_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!cxDI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec10384-0517-409a-a3c1-42a0697edf43_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!cxDI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec10384-0517-409a-a3c1-42a0697edf43_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!cxDI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec10384-0517-409a-a3c1-42a0697edf43_1536x1024.png" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8ec10384-0517-409a-a3c1-42a0697edf43_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2685188,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/178634446?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec10384-0517-409a-a3c1-42a0697edf43_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cxDI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec10384-0517-409a-a3c1-42a0697edf43_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!cxDI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec10384-0517-409a-a3c1-42a0697edf43_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!cxDI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec10384-0517-409a-a3c1-42a0697edf43_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!cxDI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8ec10384-0517-409a-a3c1-42a0697edf43_1536x1024.png 1456w" sizes="100vw"></picture></div></a></figure></div><h3><strong>TL;DR For The Week</strong></h3><ul><li><p>UK introduces stronger safeguards against AI-generated CSAM and expands enforcement powers.</p></li><li><p>UK launches a new Cyber Security and Resilience Bill with implications for AI-related cyber risks and critical digital infrastructure.</p></li><li><p>Asia-Pacific judicial bodies receive training on AI, rule of law, transparency, and due-process considerations.</p></li><li><p>OECD releases a new report on AI and competitive dynamics across downstream markets.</p></li><li><p>Global survey highlights major differences in public trust toward AI regulation across regions.</p></li><li><p>Chile advances its national AI regulatory bill, adding to momentum across Latin America.</p></li><li><p>Human Oversight 2.0 emerges as a core governance theme as regulators emphasise human&#8211;AI
teaming across safety-critical sectors.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8e2-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b04c9dc-f818-4fb2-81af-8d4b8f4773e7_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8e2-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b04c9dc-f818-4fb2-81af-8d4b8f4773e7_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!8e2-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b04c9dc-f818-4fb2-81af-8d4b8f4773e7_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!8e2-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b04c9dc-f818-4fb2-81af-8d4b8f4773e7_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!8e2-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b04c9dc-f818-4fb2-81af-8d4b8f4773e7_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8e2-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b04c9dc-f818-4fb2-81af-8d4b8f4773e7_2000x600.png" width="1456" height="437" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0b04c9dc-f818-4fb2-81af-8d4b8f4773e7_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:353479,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/178634446?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b04c9dc-f818-4fb2-81af-8d4b8f4773e7_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8e2-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b04c9dc-f818-4fb2-81af-8d4b8f4773e7_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!8e2-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b04c9dc-f818-4fb2-81af-8d4b8f4773e7_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!8e2-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b04c9dc-f818-4fb2-81af-8d4b8f4773e7_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!8e2-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b04c9dc-f818-4fb2-81af-8d4b8f4773e7_2000x600.png 1456w" sizes="100vw"></picture></div></a></figure></div><h3><strong>Spotlight</strong></h3><h4><strong>Human Oversight Is Not Optional: Lessons From Aviation for the Next Era of AI</strong></h4><p>If you have ever worked around flight systems, you learn one truth early: automation only works when humans and machines understand each other&#8217;s roles. My time in the aviation industry made that lesson impossible to ignore.
Aircraft today are marvels of automation, yet every system, from flight management to engine health to collision avoidance, is designed with one assumption in mind: that human judgment will remain the final stabilising force when complexity peaks.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://aigovernancetoday.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Governance Today! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Recently, that same principle resurfaced in AI governance. <a href="https://www.easa.europa.eu/en/newsroom-and-events/news/easas-first-regulatory-proposal-artificial-intelligence-aviation-now-open">EASA&#8217;s new trustworthiness proposal</a> explicitly foregrounds human and AI teaming. Across sectors, regulators are converging on a message aviation learned decades ago.</p><p><em><strong>The more capable the system becomes, the more important the human becomes.</strong></em></p><p>In aviation, automation failures rarely come from a single catastrophic event. They emerge from a drift between what the system was designed to do and what the human operator believes it is doing. Misaligned mental models, unanticipated edge cases, and opaque automation logic can turn minor anomalies into major incidents. The lesson extends directly to modern AI.</p><p>Large, distributed, multi-agent AI systems behave much like advanced avionics. 
They are powerful, probabilistic, context-sensitive, and unpredictable at the edges. And just like in the cockpit, the danger is rarely the AI alone; it is the gap between AI autonomy and human understanding.</p><p>This is why human oversight must evolve beyond the simplistic idea of placing a human in the loop. Oversight today must be active, contextual, risk-based, and continuous. It must recognise that humans are no longer supervising a single model, but entire pipelines of agents that communicate, coordinate, and generate downstream effects at scale.</p><p>The aviation world calls this human and automation teaming. AI governance now calls it human and AI teaming. The meaning is the same: machines handle volume and speed, humans provide judgment and accountability, and humans retain the ability to recognise when the system is confidently wrong.</p><p>As AI systems expand into courts, hospitals, airspace, supply chains, and public administration, we are entering the same phase aviation entered in the 1980s and 1990s: a shift from designing systems that humans use to designing systems that humans must understand and manage.</p><p>And if aviation taught us anything, it is this:<br><em><strong>True safety comes not from autonomy, but from partnership.</strong></em></p><p>The next era of AI governance will belong to organisations that build that partnership deliberately, with oversight, with transparency, and with a deep respect for the distinct strengths of both humans and intelligent systems.</p><h3><strong>Policy Radar: Regional and Global Governance Shifts</strong></h3><p><strong>1. UK Strengthens AI-Generated Child Sexual Abuse Material (CSAM) Safeguards</strong><br><strong>Region:</strong> United Kingdom</p><ul><li><p>The Internet Watch Foundation (IWF) reported that AI-generated child sexual abuse material (CSAM) cases more than doubled in the past year, rising from 199 in 2024 to 426 in 2025. 
</p></li><li><p><a href="https://www.gov.uk/government/news/new-law-to-tackle-ai-child-abuse-images-at-source-as-reports-more-than-double">Under new legislation</a>, designated bodies, including AI developers and child-protection organisations, will be empowered to test AI models and ensure they cannot be misused to generate CSAM. </p></li><li><p>The amendment will also broaden the ban to cover the creation, possession, or distribution of AI models specifically designed to generate CSAM.</p></li></ul><div><hr></div><p><strong>2. UK Introduces the Cyber Security and Resilience Bill Addressing AI-Relevant Cyber-Risks</strong><br><strong>Region:</strong> United Kingdom</p><ul><li><p><a href="https://www.afcea.org/signal-media/cyber-edge/uk-proposes-laws-strengthen-cyber-defenses-public-services">The bill</a> was formally introduced in Parliament to update the UK&#8217;s cybersecurity and resilience regime (including digital infrastructure, supply chains, and essential services).</p></li><li><p>Among its provisions: enhanced incident-reporting timelines, expanded regulatory powers over critical suppliers, and heavier penalties for non-compliance.</p></li><li><p>While not strictly an &#8220;AI regulation&#8221; bill, its relevance to AI arises from the interplay between AI systems, cybersecurity risks (e.g., <a href="https://genai.owasp.org/llmrisk/llm042025-data-and-model-poisoning/">model poisoning</a>, <a href="https://sedna.com/resources/what-is-supply-chain-vulnerability-and-how-do-we-assess-it">supply-chain vulnerabilities</a>) and the resilience of digital services where AI is embedded.</p></li></ul><div><hr></div><p><strong>3. 
Regional Training on &#8220;AI &amp; the Rule of Law&#8221; in Asia-Pacific</strong><br><strong>Region:</strong> Asia-Pacific (Bangkok, Thailand)</p><ul><li><p>UNESCO, together with the United Nations Development Programme (UNDP) and the Thailand Institute of Justice (TIJ), organised a regional training event for judges, prosecutors, and justice-sector officials from 11 Asian countries to explore how AI intersects with the rule of law (e.g., bias, transparency, due process).</p></li><li><p><a href="https://www.unesco.org/en/articles/ai-and-rule-law-regional-training-justice-officials-across-asia-pacific">The event</a> emphasised that as AI tools are increasingly adopted in judicial settings (case management, legal research, etc.), governance and human-rights issues must be addressed.</p></li></ul><div><hr></div><p><strong>4. Organisation for Economic Co&#8209;operation and Development (OECD) Publishes Report on AI &amp; Competitive Dynamics</strong><br><strong>Region:</strong> Global / OECD Member Countries</p><ul><li><p>The OECD released a policy paper titled <em><a href="https://www.oecd.org/en/publications/artificial-intelligence-and-competitive-dynamics-in-downstream-markets_ccf0624a-en/full-report/component-4.html">&#8220;Artificial intelligence and competitive dynamics in downstream markets&#8221;</a></em>.</p></li><li><p>The report signals how international organisations are increasingly shaping the policy discourse on AI governance and competition frameworks.</p></li></ul><div><hr></div><p><strong>5. Global Survey on Public Trust in AI Regulation</strong><br><strong>Region:</strong> Global</p><ul><li><p><a href="https://themedialine.org/headlines/global-survey-shows-wide-gaps-in-public-trust-toward-ai-regulation-by-eu-us-and-china">A survey across 25 countries</a> found wide disparities in public confidence in how well major powers (e.g., the European Union, United States, China) can manage AI regulation. 
For example, respondents in Germany and the Netherlands showed much higher trust in the EU than those in Latin America did.</p></li><li><p>The findings are a significant policy signal about the political legitimacy and societal acceptance of AI governance frameworks.</p></li></ul><div><hr></div><p><strong>6. Legislative Progress on AI Regulation in Chile</strong><br><strong>Region:</strong> Latin America (Chile)</p><ul><li><p>The lower house of Chile&#8217;s Congress advanced <a href="https://thedialogue.org/advisors/latin-america-advisor-2025-11-12">a bill to regulate artificial intelligence systems</a>, which is now moving to the Senate.</p></li><li><p>This adds to the trend of Latin American jurisdictions pursuing formal regulatory frameworks, though the bill is still in progress rather than fully enacted.</p></li></ul><h3><strong>Framework Focus</strong></h3><p><strong>Human Oversight 2.0: A Framework for High-Risk, Multi-Agent AI Systems</strong></p><p>As AI systems become more autonomous and increasingly deployed in aviation, justice, healthcare, and critical public services, the old idea of simply &#8220;keeping a human in the loop&#8221; is no longer sufficient. Two developments this week make that clear: EASA&#8217;s new aviation trustworthiness guidance emphasises human&#8211;AI teaming, and UNESCO&#8217;s Asia-Pacific judicial workshop highlighted the due-process challenges when AI supports legal decision-making.</p><p>Across sectors, organisations are recognising that oversight must evolve from reactive intervention to active governance across distributed, multi-agent systems.</p><p><strong>The Human Oversight Spectrum:</strong></p><p><strong>1. Human-in-the-Loop (HITL)</strong><br>A human must approve or intervene before the system takes an action. Typical in high-risk decision pipelines such as clinical judgments and adjudication tools.</p><p><strong>2. 
Human-on-the-Loop (HOTL)</strong><br>Real-time supervision with the ability to override system behaviour. Common in aviation, autonomous operations, and continuous monitoring systems.</p><p><strong>3. Human-over-the-Loop (HOVTL)</strong><br>Periodic monitoring informed by risk triggers, dashboards, and escalation logic. Suited to multi-agent LLM systems, adaptive workflows, and predictive pipelines.</p><p><strong>4. Human-beyond-the-Loop (HBTL)</strong><br>Humans provide long-horizon oversight such as audits, red-team reviews, post-market surveillance, and governance checkpoints for autonomous agents. Essential for systems that learn or evolve after deployment.</p><p><strong>Why it matters:</strong></p><ul><li><p>EASA&#8217;s proposed guidelines formalise human&#8211;AI teaming as a required safety layer.</p></li><li><p>Judicial bodies in Asia-Pacific are being trained on how oversight must adapt when AI participates in legal reasoning.</p></li><li><p>Multi-agent AI architectures are becoming the norm, and oversight must match that complexity.</p></li></ul><h3><strong>Looking Ahead</strong></h3><ul><li><p><strong><a href="https://www.nacdonline.org/nacd-events/national-events/webinars/pcgls/private-company-ai-webinar-november-2025/">20 November 2025 &#8211; Technology Oversight in the Age of AI: Risks and Opportunities (NACD)</a></strong><br>Board-focused webinar on how directors should approach AI risk, governance responsibilities, and oversight of rapidly evolving AI systems from a strategic and fiduciary perspective.</p></li><li><p><strong><a href="https://www.americanbar.org/events-cle/mtg/web/454661585/">20 November 2025 &#8211; AI in Discovery: Real-World Litigation Applications (ABA</a>)</strong><br>A practical session examining how AI is transforming evidence review, e-discovery workflows, and litigation strategy through real-world case applications.</p></li><li><p><strong><a href="https://www.americanbar.org/events-cle/mtg/web/454738032/">21 November 2025 
&#8211; A New Frontier of AI Readiness: How Legal Employers Are Adopting AI and What It Means for New Lawyers (ABA)</a></strong><br>A forward-looking discussion on how law firms, courts, and legal departments are integrating AI tools, and what competencies new lawyers must build to stay competitive.</p></li></ul><h3><strong>Closing Thoughts</strong></h3><p>Oversight is shifting from static supervision to dynamic, continuous governance, not over a single system but across networks of interacting AI agents that learn, coordinate, and evolve over time. As organisations move from pilot deployments to scaled, production-grade AI, the challenge is no longer just adding a human checkpoint but understanding how oversight distributes across agents, interfaces, and decision pathways.</p><p>This raises a deeper and more strategic question for every organisation deploying AI today: <strong>At which level of oversight are we operating, how does that map to the actual risk profile of our systems, and is it enough for the decisions and consequences we are now responsible for?</strong></p><p>Until next week,<br><em><strong>The AI Governance Today</strong></em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://aigovernancetoday.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Governance Today! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI Governance Today • Issue #01 • Nov 11 2025 Europe’s AI Act Meets Its First Test]]></title><description><![CDATA[Welcome to the first issue of AI Governance Today, a weekly briefing that explains how governments, companies, and institutions are shaping the rules of artificial intelligence.]]></description><link>https://aigovernancetoday.substack.com/p/ai-governance-today-issue-01-nov</link><guid isPermaLink="false">https://aigovernancetoday.substack.com/p/ai-governance-today-issue-01-nov</guid><dc:creator><![CDATA[Anmol Kumar]]></dc:creator><pubDate>Tue, 11 Nov 2025 14:42:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!muyq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4df4b2f1-b6a0-4ac0-ab1d-fef75cd98a2a_1956x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!AY31!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b524a0b-e9aa-4670-9c51-67e26f5b2d18_2000x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!AY31!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b524a0b-e9aa-4670-9c51-67e26f5b2d18_2000x600.png 424w, 
https://substackcdn.com/image/fetch/$s_!AY31!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b524a0b-e9aa-4670-9c51-67e26f5b2d18_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!AY31!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b524a0b-e9aa-4670-9c51-67e26f5b2d18_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!AY31!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b524a0b-e9aa-4670-9c51-67e26f5b2d18_2000x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!AY31!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b524a0b-e9aa-4670-9c51-67e26f5b2d18_2000x600.png" width="1456" height="437" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6b524a0b-e9aa-4670-9c51-67e26f5b2d18_2000x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:437,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:969310,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/177802510?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b524a0b-e9aa-4670-9c51-67e26f5b2d18_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" 
srcset="https://substackcdn.com/image/fetch/$s_!AY31!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b524a0b-e9aa-4670-9c51-67e26f5b2d18_2000x600.png 424w, https://substackcdn.com/image/fetch/$s_!AY31!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b524a0b-e9aa-4670-9c51-67e26f5b2d18_2000x600.png 848w, https://substackcdn.com/image/fetch/$s_!AY31!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b524a0b-e9aa-4670-9c51-67e26f5b2d18_2000x600.png 1272w, https://substackcdn.com/image/fetch/$s_!AY31!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b524a0b-e9aa-4670-9c51-67e26f5b2d18_2000x600.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Welcome to the first issue of <strong>AI Governance Today</strong>, a weekly briefing that explains how governments, companies, and institutions are shaping the rules of artificial intelligence. Each edition tracks the global pulse of AI policy: the regulations that define boundaries, the frameworks that translate ethics into practice, and the governance models that determine how technology earns public trust. </p><p>We begin this journey at a pivotal moment. Europe&#8217;s landmark <strong>AI Act</strong>, long regarded as the world&#8217;s most comprehensive AI regulation, is facing its first real test. How it weathers this moment will shape the future of global AI governance.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://aigovernancetoday.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Governance Today! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZnTU!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf85c066-9bda-49b3-a403-c999dd1ac8d1_1024x580.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZnTU!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf85c066-9bda-49b3-a403-c999dd1ac8d1_1024x580.png 424w, https://substackcdn.com/image/fetch/$s_!ZnTU!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf85c066-9bda-49b3-a403-c999dd1ac8d1_1024x580.png 848w, https://substackcdn.com/image/fetch/$s_!ZnTU!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf85c066-9bda-49b3-a403-c999dd1ac8d1_1024x580.png 1272w, https://substackcdn.com/image/fetch/$s_!ZnTU!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf85c066-9bda-49b3-a403-c999dd1ac8d1_1024x580.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZnTU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf85c066-9bda-49b3-a403-c999dd1ac8d1_1024x580.png" width="728" 
height="412.34375" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cf85c066-9bda-49b3-a403-c999dd1ac8d1_1024x580.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:580,&quot;width&quot;:1024,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:1380660,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/177802510?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F813834aa-d5b0-4f26-be08-9016fc3e6692_1024x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!ZnTU!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf85c066-9bda-49b3-a403-c999dd1ac8d1_1024x580.png 424w, https://substackcdn.com/image/fetch/$s_!ZnTU!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf85c066-9bda-49b3-a403-c999dd1ac8d1_1024x580.png 848w, https://substackcdn.com/image/fetch/$s_!ZnTU!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf85c066-9bda-49b3-a403-c999dd1ac8d1_1024x580.png 1272w, https://substackcdn.com/image/fetch/$s_!ZnTU!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcf85c066-9bda-49b3-a403-c999dd1ac8d1_1024x580.png 1456w" sizes="100vw"></picture></div></a></figure></div><h3><strong>TL;DR For The Week</strong></h3><ul><li><p><strong>EU&#8217;s AI Act under pressure:</strong> The European Commission may delay enforcement and soften penalties, prompting concerns that Europe&#8217;s landmark law could lose credibility before it begins.</p></li><li><p><strong>Global governance realignment:</strong> China proposed a <em>World AI Cooperation Organization</em> at APEC, signalling ambitions to shape multilateral AI rules.</p></li><li><p><strong>China&#8217;s new AI standards:</strong> Three national standards for generative AI, covering data annotation, dataset safety, and cybersecurity, came into force on 1 Nov 2025.</p></li><li><p><strong>U.S. 
states step up:</strong> California and New York lead with new AI transparency and safety laws as federal regulation stalls.</p></li><li><p><strong>Sector spotlight &#8211; Aviation:</strong> EASA issued new guidelines on AI trustworthiness and human-AI teaming for safety-critical systems.</p></li><li><p><strong>India&#8217;s governance push:</strong> India launched <em>AI Governance Guidelines 2025</em> (&#8220;Do No Harm&#8221;) and proposed deepfake-labeling amendments under its IT Rules.</p></li><li><p><strong>Framework focus:</strong> The new <strong>Five-Layer AI Governance Model</strong> connects regulation &#8594; standards &#8594; assurance &#8594; certification &#8594; implementation, offering a practical blueprint for operationalising AI governance ahead of 2025&#8211;26 compliance deadlines.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!muyq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4df4b2f1-b6a0-4ac0-ab1d-fef75cd98a2a_1956x600.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!muyq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4df4b2f1-b6a0-4ac0-ab1d-fef75cd98a2a_1956x600.png 424w, https://substackcdn.com/image/fetch/$s_!muyq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4df4b2f1-b6a0-4ac0-ab1d-fef75cd98a2a_1956x600.png 848w, https://substackcdn.com/image/fetch/$s_!muyq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4df4b2f1-b6a0-4ac0-ab1d-fef75cd98a2a_1956x600.png 1272w, 
https://substackcdn.com/image/fetch/$s_!muyq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4df4b2f1-b6a0-4ac0-ab1d-fef75cd98a2a_1956x600.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!muyq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4df4b2f1-b6a0-4ac0-ab1d-fef75cd98a2a_1956x600.png" width="728" height="223.5" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4df4b2f1-b6a0-4ac0-ab1d-fef75cd98a2a_1956x600.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:447,&quot;width&quot;:1456,&quot;resizeWidth&quot;:728,&quot;bytes&quot;:577959,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://aigovernancetoday.substack.com/i/177802510?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b51d82d-33f0-464f-aa1d-af43bb2895b5_2000x600.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!muyq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4df4b2f1-b6a0-4ac0-ab1d-fef75cd98a2a_1956x600.png 424w, https://substackcdn.com/image/fetch/$s_!muyq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4df4b2f1-b6a0-4ac0-ab1d-fef75cd98a2a_1956x600.png 848w, 
https://substackcdn.com/image/fetch/$s_!muyq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4df4b2f1-b6a0-4ac0-ab1d-fef75cd98a2a_1956x600.png 1272w, https://substackcdn.com/image/fetch/$s_!muyq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4df4b2f1-b6a0-4ac0-ab1d-fef75cd98a2a_1956x600.png 1456w" sizes="100vw"></picture><div></div></div></a></figure></div><h3><strong>Spotlight</strong></h3><p>The European Union&#8217;s landmark <a href="https://artificialintelligenceact.eu/">Artificial Intelligence Act</a> is facing what may be its first true stress test. <a href="https://www.reuters.com/sustainability/boards-policy-regulation/big-tech-may-win-reprieve-eu-mulls-easing-ai-rules-document-shows-2025-11-07/">Reports</a> suggest that the European Commission is preparing to soften enforcement measures and offer a grace period before fines take effect.</p><p>At first glance, this may appear to be a pragmatic move. Europe is under growing pressure to remain competitive as the United States, China, and other regions rapidly advance their AI capabilities. Compliance costs under the Act are significant, especially for smaller firms and open-source developers. A phased or flexible rollout could, in theory, allow innovation to continue while regulators build capacity.</p><p>But there&#8217;s a deeper concern. The AI Act was meant to be the world&#8217;s first comprehensive AI law, a model of human-centric governance that balances innovation with safety, transparency, and accountability. If its enforcement is diluted before it even begins, the EU risks undermining its credibility as a global rule-setter. The message to the world would be that strong AI regulation is aspirational, not operational. Europe&#8217;s position has always been distinctive. It may not lead in the raw development of AI models, but it has long sought to lead in setting ethical and legal standards. Weakening those standards now, especially under pressure from major tech firms, could erode public trust and discourage smaller innovators who depend on a fair, predictable regulatory environment.</p><p>The lesson here is not that Europe should be rigid. Rather, it should be resilient. A regulatory framework can evolve without surrendering its integrity. Enforcement flexibility can be balanced with firm commitments to transparency, oversight, and rights protection. The AI Act&#8217;s greatest strength was its moral clarity. Its next challenge will be proving that clarity can survive real-world pressure.</p><h3><strong>Policy Radar: Regional and Global Governance Shifts</strong></h3><p><strong>1. EU Rethinks AI Act Enforcement Timeline</strong><br><strong>Region:</strong> Europe</p><p>The European Commission is considering easing the timeline for enforcing provisions of the Artificial Intelligence Act following pressure from industry and U.S. officials. <a href="https://www.reuters.com/sustainability/boards-policy-regulation/big-tech-may-win-reprieve-eu-mulls-easing-ai-rules-document-shows-2025-11-07/#:~:text=The%20changes%20include%20exempting%20companies,Chee%20Editing%20by%20Gareth%20Jones">Proposed changes</a> include:</p><ul><li><p>A one-year grace period for high-risk AI systems placed on the market before the Act&#8217;s enforcement date.</p></li><li><p>Loosening registration requirements for narrowly scoped systems.</p></li><li><p>Phased enforcement of transparency rules for AI-generated content.</p></li></ul><p>These revisions are part of a broader &#8220;Digital Omnibus&#8221; draft expected to be unveiled mid-November. Final approval is still pending from the EU Parliament and member states.</p><div><hr></div><p><strong>2. 
China Proposes Global AI Governance Body at APEC</strong><br><strong>Region:</strong> Global (China-led)</p><p>At the APEC summit, China proposed the creation of a <a href="https://www.reuters.com/world/china/chinas-xi-pushes-global-ai-body-apec-counter-us-2025-11-01/">&#8220;World Artificial Intelligence Cooperation Organization&#8221;</a> to establish global AI rules. Framing AI as a public good, China positioned itself as a leader in multilateral AI regulation, contrasting with the U.S. preference for domestic approaches.</p><div><hr></div><p><strong>3. China Implements New Domestic AI Security Standards</strong><br><strong>Region:</strong> China<br><strong>Date Effective:</strong> November 1, 2025</p><p>China activated three new <a href="https://www.insideprivacy.com/international/china/china-releases-new-labeling-requirements-for-ai-generated-content/#:~:text=On%20March%2014%2C%202025%2C%20the,AI%20(%E2%80%9CGenAI%E2%80%9D).">national standards</a> to govern generative AI:</p><ul><li><p>Data annotation security guidelines</p></li><li><p>Dataset safety evaluation protocols for pre-training and fine-tuning</p></li><li><p>Baseline cybersecurity requirements for generative AI platforms</p></li></ul><p>These technical rules further formalize China&#8217;s risk-based approach to AI safety and ethics.</p><div><hr></div><p><strong>4. U.S. 
States Take the Lead in AI Legislation</strong><br><strong>Region:</strong> United States</p><p>With no comprehensive federal AI law, states like California and New York are leading the charge:</p><p><strong>California:</strong></p><ul><li><p>Enacted the <a href="https://legiscan.com/CA/text/SB53/id/3271094#:~:text=The%20TFAIA%20would%20require%20the,its%20frontier%20models%2C%20as%20prescribed.">Transparency in Frontier Artificial Intelligence Act (SB 53)</a>, requiring developers of advanced models to assess catastrophic risks and publish transparency reports.</p></li><li><p><a href="https://natlawreview.com/article/california-governor-vetoes-bill-would-have-required-employers-provide-notice-ai-use#:~:text=Related%20Practices%20&amp;%20Jurisdictions&amp;text=On%20October%2013%2C%202025%2C%20California,and%20regulations%20concerning%20such%20technology.">Vetoed the &#8220;No Robo Bosses Act&#8221; (SB 7)</a>, citing overlapping regulations and implementation concerns.</p></li><li><p>Finalized <a href="https://cppa.ca.gov/announcements/2025/20250923.html">privacy regulations</a> for automated decision-making, effective 2027, focused on notification, opt-outs, and risk assessments.</p></li></ul><p><strong>New York:</strong></p><ul><li><p>Passed the <a href="https://www.nysenate.gov/legislation/bills/2025/A6453/amendment/A">Responsible AI Safety and Education (RAISE) Act,</a> now awaiting the governor&#8217;s approval. Targets frontier models with &gt;$100M training costs.</p></li><li><p><a href="https://www.gtlaw.com/en/insights/2025/10/new-york-poised-to-be-at-the-forefront-of-ai-regulation-five-bills-await-gov-hochuls-action">Additional bills</a> address AI labeling on social media, synthetic performer disclosures in ads, AI likeness protections, and expanded oversight of state agency AI use.</p></li></ul><div><hr></div><p><strong>5. 
EASA Releases AI Trustworthiness Guidelines for Aviation</strong><br><strong>Region:</strong> Europe</p><p>The European Union Aviation Safety Agency (EASA) issued a <a href="https://www.easa.europa.eu/en/document-library/notices-of-proposed-amendment/npa-2025-07">Notice of Proposed Amendment for AI use in aviation</a>. The framework includes:</p><ul><li><p>Validation protocols for AI-based decision aids</p></li><li><p>Human-AI teaming requirements</p></li><li><p>Ethical and transparency safeguards for safety-critical applications</p></li></ul><p>This proposal is <a href="https://www.easa.europa.eu/en/newsroom-and-events/news/easas-first-regulatory-proposal-artificial-intelligence-aviation-now-open#:~:text=As%20part%20of%20EASA's%20AI,EU)%202024%2F1689).">now open for public consultation</a> and aligns closely with the broader EU AI Act.</p><div><hr></div><p><strong>6. India Launches AI Governance Guidelines and Tackles Deepfake Risks</strong><br><strong>Region:</strong> India</p><p>India&#8217;s Ministry of Electronics and IT (MeitY) released the &#8220;<a href="https://indiaai.s3.ap-south-1.amazonaws.com/docs/guidelines-governance.pdf">India AI Governance Guidelines</a>,&#8221; outlining a principle-based framework centered on the motto &#8220;Do No Harm.&#8221; Key features include:</p><ul><li><p>Establishing new institutions: AI Governance Group (AIGG), Technology &amp; Policy Expert Committee (TPEC), and AI Safety Institute (AISI).</p></li><li><p>Short-, medium-, and long-term actions on infrastructure, capacity, regulation, and risk.</p></li><li><p>Reliance on existing laws (IT Act, DPDP Act, etc.) for regulation, with new legislation possible in the future.</p></li><li><p>Emphasis on voluntary ethical guidelines and sector-specific regulation.</p></li></ul><p>Simultaneously, MeitY proposed amendments to the IT Rules, 2021, requiring social media platforms to label and watermark AI-generated content. 
Public consultation on these amendments was extended to November 13. Industry stakeholders have expressed concern about feasibility and potential impact on innovation. A full-fledged &#8220;AI Act&#8221; is being considered for parliamentary introduction in the near future.</p><h3><strong>Framework Focus</strong></h3><p>The most pressing challenge in AI governance isn&#8217;t crafting new rules &#8211; it&#8217;s making them work in practice. A fresh framework offers a clear way for organisations and regulators to bridge that gap.</p><p><strong><a href="https://arxiv.org/abs/2509.11332">The Five-Layer AI Governance Model</a></strong><br>This model sees governance as a stack of five interdependent layers:</p><ol><li><p><strong>Regulation</strong> &#8211; Broad laws, statutes and policy mandates (for example the EU AI Act), together with quasi-regulatory frameworks such as NIST&#8217;s AI Risk Management Framework.</p></li><li><p><strong>Standards</strong> &#8211; Operational definitions and normative specifications (for example ISO/IEC 42001, IEEE 7000) that translate regulation into technical or organisational terms.</p></li><li><p><strong>Assurance</strong> &#8211; Independent audits, conformity assessments and third-party checks of compliance with standards.</p></li><li><p><strong>Certification</strong> &#8211; Formal validation or accreditation demonstrating that an organisation or system has achieved a recognised level of compliance maturity.</p></li><li><p><strong>Implementation</strong> &#8211; The on-the-ground processes, documentation, monitoring systems, lifecycle controls and governance practices that embed all the previous layers into daily operations.</p></li></ol><p><strong>Why it matters:</strong></p><ul><li><p>It delivers a <strong>practical blueprint</strong> linking high-level rules (layer 1) to everyday operational realities (layer 5).</p></li><li><p>It highlights that good regulation or standards alone don&#8217;t guarantee effective governance &#8211; you also need assurance, certification and strong
internal implementation.</p></li><li><p>With many jurisdictions moving toward compliance deadlines in 2025&#8211;2026, this stack helps organisations spot where their weakest link might be (e.g., laws in place but no assurance mechanism; good standards but poor implementation).</p></li><li><p>For practitioners designing and deploying AI systems (especially across borders, multi-agent systems, or supply chains) it offers a roadmap to ask: <em>Which layer are we weakest in? Where will regulators likely focus next?</em></p></li><li><p>For regulators and policy-makers it suggests that enforcement isn&#8217;t just about laws &#8211; it&#8217;s about ensuring standards are adopted, audits are credible, certifications carry weight, and firms actually embed the practices.</p></li></ul><h3><strong>Looking Ahead</strong></h3><ul><li><p><strong>1 January 2026</strong> &#8211; New <strong>U.S. state-level AI and data-governance laws</strong> take effect in California, Colorado, and Connecticut, introducing algorithmic-accountability and transparency requirements for companies deploying AI in consumer and employment contexts.</p></li><li><p><strong>19&#8211;20 February 2026</strong> &#8211; <em>AI Impact Summit</em>, Delhi, India<br>Global event spotlighting applied AI governance, responsible deployment, and regional collaboration on AI policy.</p></li><li><p><strong>24&#8211;26 February 2026</strong> &#8211; <em>International Association for Safe and Ethical AI (IASEAI) 2026 Conference</em>, Paris<br>A leading forum on safe and ethical AI practices convening regulators, researchers, and industry leaders to discuss implementation of global frameworks.</p></li></ul><h3><strong>Closing Thoughts</strong></h3><p>AI governance is entering a defining phase.
The headlines this week, from Europe&#8217;s wavering enforcement to China&#8217;s push for global coordination and India&#8217;s principle-based approach, show that the world is no longer debating <em>whether</em> to regulate AI but <em>how</em> to do it. The balance between flexibility and credibility will determine which regions lead in shaping responsible innovation.</p><p>Rules alone are not enough. The Five-Layer Governance Model underscores that true oversight lives in the details of implementation, assurance, and certification. As new compliance timelines draw near, the real challenge for both regulators and organisations will be translating ideals into everyday practice while safeguarding trust, accountability, and human values. One question looms large: <strong>Can humanity govern intelligence that is learning faster than our ability to regulate it?</strong></p><p>Until next week,<br><em><strong>The AI Governance Today </strong></em></p>]]></content:encoded></item></channel></rss>