News · Policy · Live

Policy, newest first.

1

MeitY Establishes AI Policy Committee

India's MeitY established a Technology and Policy Expert Committee (TPEC) to advise its AI governance group, providing technical and policy inputs. This streamlines framework development for policy designers and offers clearer regulatory guidance for founders and procurement teams.

2

Bullock advocates AI integration in Hollywood

Sandra Bullock is advocating for constructive AI integration in Hollywood, shifting the narrative for creative directors and studio executives. This affects procurement teams evaluating AI solutions and signals a constraint for developers: AI tools will require comprehensive governance frameworks and transparency.

3

HR Strategic Shift Driven by AI, Legislation

AI and new legislation are forcing human resources to become a strategic driver, not just a support function. Organisations failing to adapt risk competitive disadvantage, requiring founders and HR leaders to integrate AI and regulatory changes into strategic planning.

4

Bullock urges cautious AI embrace in Hollywood

Bullock and Warner Bros. Co-Chair Pam Abdy reacted to AI-generated fan trailers. Bullock urged Hollywood to embrace AI cautiously, citing its potential and risks. This signals studios must define stances on AI content and manage intellectual property.

5

RBI flags AI finance risks

Uncontrolled AI deployment risks amplifying systemic vulnerabilities for financial institutions, increasing exposure to bias, data misuse, and cyber threats. Risk management teams must prioritise auditable safeguards, moving beyond vendor claims to verifiable controls.

6

UNM Students Face Conflicting AI Policies

Inconsistent AI policies in higher education create ambiguity for students, impacting their learning outcomes and preparedness for the workforce. Without explicit guidance, students must navigate conflicting expectations, potentially developing disparate skill sets compared to peers.

7

AI fluency drives employee appraisals and promotions

Companies are now directly linking employee appraisals, promotions, and salaries to AI proficiency, shifting evaluation from effort to output. This creates a skill premium for AI-fluent individuals, impacting career progression and compensation across roles.

8

Imperial County approves data centre despite protests

Imperial County approved a 950,000-square-foot data centre despite protests and an arrest, highlighting escalating community resistance to AI infrastructure. Procurement and architecture teams face increased social and political risks in site selection, necessitating comprehensive resource impact and communication strategies.

9

Employers establish AI use guidelines

AI tools are reshaping job-search dynamics, driving a 3.5x increase in the applications recruiters receive. Job seekers must deeply personalise AI use to stand out, while hiring managers face new challenges in detecting AI-assisted deception and establishing clear AI usage policies.

10

Editorial Backs AI IP Framework for Licensing

An Adirondack Daily Enterprise editorial advocates for federal licensing and collective rights systems to protect content creators and news publishers from uncompensated AI use. This call could reshape data acquisition for AI developers and introduce new compliance requirements for procurement teams.

11

WMF Hosts International AI Summit on Governance

The AI Global Summit's focus on governance signals increasing regulatory pressure on AI development. Procurement teams and legal architects face evolving compliance requirements and a fragmented cross-border operational landscape.

12

Medvi receives FDA warning letter

AI tools enable founders to scale companies to significant valuations, automating development, marketing, and customer support. This shifts unit economics for entrepreneurs, but introduces compliance risks in regulated sectors, as Medvi's FDA warning shows.

13

Napa Valley Unified Approves AI Policy

Napa Valley Unified School District approved a new AI use policy, balancing AI's potential to enhance learning and support staff with strict guidelines on ethics, equity, and academic honesty. This sets a precedent for educational institutions navigating AI integration.

14

New York Post retains human reporters for on-the-ground reporting

The New York Post maintains human "runners" for on-the-ground reporting, a role its managing editor states is resistant to AI. This highlights the enduring value of direct human interaction for authentic information gathering, a critical constraint for tasks beyond current large language models.

15

PWCS Restricts AI Glasses Use

Prince William County Public Schools now prohibits AI-enabled glasses in schools, classifying them as restricted personal electronic devices under policy 729. This guidance, aligning with state recommendations, sets a precedent for managing wearable AI's privacy and security implications in educational environments.

16

Imperial County approves AI data centre, faces lawsuit

Imperial County approved a 950,000-square-foot AI data centre with a CEQA exemption, sparking lawsuits from the City of Imperial and residents. This sets a precedent for environmental oversight in critical infrastructure development, increasing scrutiny for procurement teams and security architects.

17

Judge blocks Pentagon's sanction on Anthropic

A US District Judge indefinitely blocked the Pentagon's designation of Anthropic as a supply chain risk. This limits the Department of Defence's ability to enforce AI use policies through punitive measures, preventing a precedent where policy disagreements could trigger severe vendor restrictions for US companies.

18

China names AI tokens 'ciyuan'

China's official naming of AI tokens as 'ciyuan' signals a strategic intent to define the foundational units of the AI economy. This positions AI compute as a quantifiable commodity, potentially shifting how platform engineers and procurement teams evaluate and acquire AI services.

19

Southeast Asia embraces nuclear power for AI

Southeast Asian nations are reviving nuclear power plans to meet AI data centre energy demand and bolster energy security. Five ASEAN members are pursuing nuclear energy, with plants potentially online by the 2030s, impacting data centre infrastructure and energy procurement.

20

Cuban predicts shorter workdays with AI

Mark Cuban predicts larger companies will use AI agents to reduce employee workdays by at least an hour, maintaining pay. This shifts productivity rewards and demands comprehensive security guardrails from platform engineers to prevent data exposure.

21

Launches AI and Startup Center

Philippine business leaders and policymakers convened to discuss AI's impact, with a focus on its potential to democratise advanced analytics for ASEAN's 70 million MSMEs. This shift requires significant investment in AI infrastructure and regional cooperation to unlock productivity gains.

22

US Plans AI Policy Council with CEOs

The planned national AI strategy will solidify with direct industry input. The upcoming appointment of major tech CEOs to a federal AI policy council signals increased government-industry collaboration, shaping future AI regulation and investment priorities for founders and investors.

23

GitHub Copilot updates data policy for AI training

GitHub will now train Copilot AI models using interaction data from Free, Pro, and Pro+ users by default, effective April 24. This change requires users to actively opt out to prevent their code context and interactions from contributing to model improvements, impacting individual developers and procurement teams.

24

EU Chief Scrutinises Tech CEOs

Regulatory scrutiny on AI infrastructure and services intensifies. CTOs, architects, and procurement teams face compliance burdens and restrictions on integrated AI services, impacting platform choices and development strategies. The EU's examination of the entire AI stack signals potential interventions.

25

OpenAI Releases Teen Safety Policies for AI

OpenAI releases open-source, prompt-based safety policies for developers. These provide a standardised baseline for teen safety in AI applications, reducing complexity for teams to implement effective protections against harms like graphic content and dangerous activities.

26

AI integral to telecom networks

TRAI Chairman Anil Kumar Lahoti declared AI integral to telecom networks, shifting from peripheral to core infrastructure. This mandates AI integration across network design and operations, impacting engineering teams and procurement as India prepares for AI-native 6G services.

27

Pentagon designates Anthropic supply chain risk

Anthropic's refusal to allow its AI for autonomous weapons led the DoD to designate it a "supply chain risk," barring Pentagon use. This contrasts with OpenAI's new defence deal and Palantir's pro-military stance.

28

Australia lags AI scribe oversight, risks patient safety

Unregulated AI scribes in Australian medical practice introduce patient safety and clinician liability risks. These tools can hallucinate, omit critical information, and introduce biases, leaving procurement and legal teams to manage unmitigated data security and consent challenges.

29

Pentagon Adopts Palantir Maven AI System

The Pentagon formally adopted Palantir's Maven AI system as a program of record, ensuring long-term funding and expanded use across all military branches. This locks military procurement into a specific vendor for critical command-and-control functions, standardising AI-enabled decision-making.

30

Justice Kant: AI to Assist Judges

Judicial systems globally now face a clear directive on AI's role, limiting its application to assistive functions and blocking its use in core adjudicatory processes. This establishes a policy boundary, constraining legal tech founders and architects designing judicial AI systems.

31

Pentagon Adopts Palantir Maven AI System

Palantir's Maven AI is now a core US military system, securing long-term funding and streamlined adoption. This provides warfighters rapid data analysis, reducing hours to minutes, but raises questions for security architects regarding integrated third-party AI.

32

Proposes Federal AI Preemption Legislation

Federal preemption of state AI laws would standardise compliance for founders and CTOs, reducing fragmented regulatory burdens. This mechanism limits states' ability to regulate AI development, potentially invalidating existing laws and centralising regulatory power at the federal level.

33

Huang Proposes AI Token Compensation for Engineers

Nvidia CEO Jensen Huang proposed a new compensation model, offering engineers AI tokens alongside their salary. This directly links pay to AI agent deployment, creating complexities for HR and procurement teams in valuing and integrating this new economic model.

34

Trump Unveils National AI Framework

The Trump administration released a national AI policy framework, aiming for uniform federal rules. This challenges existing state AI laws, creating regulatory uncertainty for legal and compliance teams and potentially invalidating state-level protections.

35

White House Releases National AI Legislative Framework

The White House's new national AI framework aims to preempt state regulations, standardising the US AI landscape. This reduces compliance complexity for developers, potentially accelerating innovation by removing varied state-specific requirements.

36

Microsoft Threatens Legal Action Against OpenAI, AWS

Microsoft's reported legal threat in March 2024 against OpenAI and Amazon over AWS hosting created uncertainty for procurement teams. The dispute over API exclusivity signals increased vendor lock-in risks and potential restrictions on multi-cloud AI deployments.

37

US Tax Court Addresses AI Misuse in Legal Proceedings

The US Tax Court's new AI misuse guardrails, prompted by AI-generated legal hallucinations and data privacy concerns, will require legal and tax professionals to vet AI tools and secure sensitive taxpayer information, risking penalties up to $25,000.

38

Pentagon Bans Anthropic AI Tools

The Pentagon's ban on Anthropic's Claude creates significant operational disruption for military users. Replacing the embedded AI requires lengthy recertification processes and forces a return to manual workflows, incurring substantial costs and productivity losses.

39

Anthropic Sues Pentagon Over AI Ban

Anthropic's legal challenge against the Pentagon's 'supply chain risk' designation, following its refusal to lift AI usage guardrails, sets a precedent for developers defining deployment terms and impacts government procurement.

40

New York Establishes AI Commission for Policy

New York's new FutureWorks Commission will shape the state's AI regulatory landscape, impacting founders and procurement teams. Its policy recommendations will influence energy demands for data centres and workforce adaptation, potentially shifting unit economics for AI infrastructure and talent acquisition.

41

Refuses Pentagon Surveillance Demands

Government AI adoption faces significant governance and data control challenges, constraining state deployment. AI companies' control over model safeguards and data usage policies directly limits government capabilities, requiring procurement teams to scrutinise vendor terms.

42

Telangana Unveils AI Job Plan

AI's potential to displace white-collar jobs is driving Telangana's new "Vision-2047" blueprint. This plan aims to create blue-collar employment and target a $3 trillion economy by 2047, signalling a state-level reorientation to mitigate AI's labour market impact.

43

Open University Malaysia Launches AI Education Manifesto

Open University Malaysia launched a manifesto for human-centric AI in education. This intervention calls for ethical, context-sensitive AI integration, shifting focus from technological capability to human and pedagogical purposes, addressing risks of uncritical adoption.

44

US Central Command Accelerates AI Targeting in Military Operations

Military operations now execute targeting decisions at faster speeds, fundamentally altering the "kill chain" timeline. US Central Command confirmed AI tool deployment in the war against Iran, accelerating intelligence analysis from hours or days to seconds.

45

Colorado Proposes AI Liability Shift Framework

Colorado's AI Policy Working Group proposed a framework to revise the state's 2024 AI regulations, shifting liability for AI discrimination to courts. This redefines the risk landscape for AI developers and deployers, offering clearer guidance for "consequential decisions" while excluding consumer tools.

46

UK Reverses AI Copyright Stance After Backlash

The UK government reversed its AI copyright policy, abandoning an "opt-out" model for training data. This creates regulatory uncertainty for AI developers, who now lack clear guidelines for acquiring copyrighted content, while offering a reprieve to creative industries from automatic use of their work.

47

Costello Urges IRS AI Tax Guidance

Businesses face significant audit risk and criminal liability for errors from AI tax preparation tools, as providers shift accountability to clients. Former Congressman Ryan Costello urged the IRS to issue federal guidance, preventing a patchwork of state rules and protecting businesses and CPAs.

48

Anthropic Doubles Claude Off-Peak Access for Users

Anthropic doubled Claude usage limits for off-peak hours until March 27, 2026, for most subscribers. This reallocates computational demand, offering more capacity without extra cost during specific windows, and aims to reduce peak load on infrastructure.

49

Insurers Offer AI Malfunction Cover

AI liability is shifting from implicit to explicit, as insurers introduce specific malfunction coverage or exclusions. This change forces procurement and security teams to re-evaluate risk transfer strategies, with a projected $4.8 billion market by 2032 indicating significant new costs for AI deployment.

50

Oversight Board Calls for AI Rules

Undisclosed AI-generated content during Michigan tornadoes rapidly spread misinformation, enabling monetisation and prompting Meta's Oversight Board to demand new platform rules. This undermines accurate information access, requiring effective detection and mandatory disclosure systems.

51

Meta to establish AI content rules

AI-generated images and videos exploiting Michigan tornadoes went viral, deceiving users and monetising misinformation. Social media platforms face increased pressure to implement effective content provenance mechanisms and address algorithms amplifying fake content.

52

Urges LLM Safeguards for AI Safety

Rapid deployment of large language models without matching safety measures creates systemic risks, impacting user trust and economic potential. Procurement and security teams must address governance gaps, with nations like India poised to lead in testing and evaluation frameworks.

53

US Commerce Revokes AI Export Mandate

The US Commerce Department revoked a proposed AI accelerator export rule, preventing a mandated doubling of hardware costs for foreign operators. This removes a significant financial constraint on international AI infrastructure development, though new export rules are still forthcoming.

54

Palantir CEO Clarifies DoD AI Use

Palantir CEO Alex Karp states the US Department of Defence has no domestic AI surveillance plans, amidst its dispute with Anthropic over usage terms. This conflict highlights tension between AI ethics and national security demands, complicating vendor contracts for founders and investors.

55

European Businesses Warn Brussels on Digital Sovereignty Push

European businesses warn Brussels that its digital sovereignty push risks significant economic disruption. Unwinding deep dependencies on US software, cloud, and AI services cannot occur rapidly without undermining competitiveness and profitability, impacting procurement teams and CTOs.

56

China Bans OpenClaw AI in State Agencies

China's central government restricted state-run entities from installing OpenClaw AI on office computers, citing data security risks. This move constrains procurement and security teams, highlighting Beijing's control over rapid AI integration despite local tech adoption and subsidies.

57

Democrats Question Pentagon AI Targeting

Congressional scrutiny of AI's role in military targeting establishes a precedent for accountability. Procurement teams evaluating defence AI must prioritise verifiable data provenance and human-in-the-loop protocols, especially after a strike linked to outdated intelligence killed over 170 civilians.

58

US Military Expands AI Targeting in Iran

Military reliance on AI for target identification raises human oversight questions in lethal decisions. Procurement teams face new vendor complexities as the Defence Department labelled Anthropic a national security threat, threatening its removal from military use.

59

Microsoft Backs Anthropic Against Pentagon

The Pentagon's "supply chain risk" designation for Anthropic, challenged by Microsoft and former military leaders, sets a precedent for using national security labels in contract disputes. This creates uncertainty for procurement teams regarding vendor stability and AI use in defence.

60

Meta Criticized for AI Content Moderation

Meta's Oversight Board criticised the company's AI content detection, deeming its user-reporting system insufficient for crisis situations. This shifts pressure onto platforms to proactively identify deceptive AI-generated content, impacting security and moderation teams.

61

Scotland deploys AI traffic cameras for enforcement

AI-driven road surveillance establishes a new baseline for driver monitoring in Scotland, raising privacy concerns for legal teams and individuals. The six-month trial quantifies mobile phone and seatbelt non-compliance, informing future enforcement strategies.

62

China restricts OpenClaw AI in state firms

Chinese authorities restricted state-run enterprises and government agencies from installing OpenClaw AI applications due to security risks. This directive blocks deployment and mandates removal of existing installations, limiting operational flexibility for procurement and security teams.

63

Policymakers Trial AI Companions for Elderly

Policymakers are trialling AI companions like ElliQ to combat loneliness, shifting care models towards technological intervention. Procurement teams and security architects must evaluate efficacy, ethics, and data privacy for these intimate AI systems.

64

Meta Board Rebukes AI Content Handling

Meta's Oversight Board demands an overhaul of AI content rules after a fake video depicting conflict damage garnered nearly 1 million views. The board states Meta's current moderation is insufficient for the scale of AI-generated content, particularly during crises.

65

Amazon Curbs AI Code Pushes After Outage

Amazon now requires senior approval for AI-assisted code pushes by junior and mid-level engineers, following a trend of "high blast radius" incidents caused by generative AI tools. This policy shift impacts engineering velocity and risk management, highlighting operational risks of agentic AI.

66

Enterprise Ireland Launches AI Webinar Series for SMEs

Enterprise Ireland and Skillnet Ireland launched an AI webinar series for SMEs, aiming to demystify AI and accelerate adoption. This initiative provides a structured mechanism for business leaders to build foundational AI knowledge, potentially reducing the cost and timeline for AI exploration.

67

Anthropic Faces DoD Supply Chain Risk Designation

OpenAI and Google employees filed an amicus brief supporting Anthropic against the DoD's 'supply-chain risk' designation. This creates unpredictability for procurement teams, as a domestic AI firm faces restrictions, potentially limiting government-AI collaboration on ethical grounds.

68

CDS Highlights AI Role in Warfare

India's Chief of Defence Staff General Anil Chauhan states AI and autonomous systems are decisive in future conflicts, requiring significant energy. This shifts defence strategy towards AI integration and highlights nuclear power's role for data centres, despite India being at an early stage in formulating its AI strategy.

69

China Prioritizes Tech Self-Sufficiency in Plans

China's legislature prioritises long-term tech self-sufficiency in AI, quantum, and semiconductors, alongside immediate domestic market growth. This dual focus aims to transform China into a tech-driven economy, intensifying state-backed competition and disrupting global tech markets.

70

X Suspends AI Disinformation Monetisation

Online creators monetised a significant wave of AI-generated disinformation about the US/Israel-Iran conflict, accumulating hundreds of millions of views. X responded by suspending monetisation for unlabelled AI-generated armed conflict footage, highlighting the clear need for effective detection and labelling mechanisms.

71

Australia warns AI developers on compliance

Australia's government warns tech giants to align AI deployments with national values, or face strict regulation. This increases sovereign oversight, requiring founders and CTOs to integrate values into AI roadmaps and procurement teams to conduct new due diligence, impacting market access.

72

Anthropic Designated US Supply-Chain Risk

The US government's designation of Anthropic as a supply-chain risk, alongside OpenAI's Pentagon deal, introduces significant risk for startups pursuing government contracts. Procurement teams face increased scrutiny over vendor stability and ethical use, especially with changing contract terms.

73

Coalition Unveils AI Safety Roadmap

A cross-party coalition published the 'Pro-Human Declaration,' a framework for responsible AI development, advocating for human control and legal accountability. This highlights the immediate need for clear regulatory mechanisms, impacting procurement, security, and product roadmaps for frontier AI.

74

India CDS: AI Transforms Warfare Strategy

AI and autonomous systems will shape future warfare and demand large amounts of energy. This shifts the basis of military might to data and networks, linking AI adoption directly to nuclear power development for data centres. Defence strategists and procurement teams face new requirements.

75

US Mandates Strict AI Terms for Contracts

Access to US government AI contracts now demands vendors relinquish control over model usage and content. Procurement teams and founders seeking federal business must accept broad, irrevocable licensing terms, shifting liability and operational control to the government.

76

California Colleges Fund Flawed AI Systems

California community colleges commit millions to AI chatbots that deliver inaccurate information, forcing students to unofficial channels. This waste of public funds highlights critical failures in system integration and data sourcing for student support services.

77

Anthropic Refuses Military AI Terms, Maintains Safeguards

Anthropic's refusal to remove military AI use safeguards signals significant supply chain risk for frontier models. Procurement teams must now scrutinise vendor terms, as AI's role in accelerated targeting demands immediate governance review.

78

Hatch Foundation Publishes AI Policy Review

The Hatch Foundation's 2025 Policy Review offers a multi-sectoral framework for AI's societal integration. It outlines how AI reshapes operations, highlighting the need for collaborative frameworks to manage risks and ensure beneficial deployment.

79

PTAB Clarifies AI Patent Disclosure Requirements

The Patent Trial and Appeal Board (PTAB) is clarifying AI patent disclosure requirements. Innovators must now explicitly detail AI model types, training, and data characteristics, moving beyond general descriptions to secure intellectual property.

80

Anthropic Labelled Supply Chain Risk by Pentagon

Anthropic's ethical stance became a commercial liability in defence procurement, as the Department of War's "supply-chain risk" designation jeopardises its contracts. This is a warning for procurement teams and investors evaluating AI vendors, demonstrating how ethical positions can limit market access.

TOP 80 · LAST 72H · GA WEIGHTED