The rapid evolution of artificial intelligence (AI) has placed it at the centre of a global race for technological supremacy. In a major policy reversal, President Trump overturned President Biden’s 2023 executive order regulating AI, reigniting debates about how best to balance innovation, safety, and global competitiveness.

 

Biden’s policy had mandated that developers submit safety test results for AI systems posing risks to national security, public health, or the economy—a move criticised by opponents as excessive bureaucracy.

 

Trump’s approach aligns with a Republican preference for deregulation, premised on the argument that overly stringent oversight stifles U.S. technological leadership. Critics, however, warn that dismantling safeguards leaves AI vulnerable to misuse, bias, and exploitation, potentially undermining national and global security.

 

 

A Clash of Philosophies

 

The policy shift highlights a deep ideological divide in governance. Biden’s executive order emphasised federal oversight to mitigate AI risks, requiring transparency from developers and empowering agencies like NIST to set safety standards.

 

Trump’s administration, in contrast, framed these measures as burdensome, advocating for a market-driven approach to maintain America’s competitive edge against China’s accelerating AI advancements.

 

Industry groups reinforced this stance, cautioning that mandatory disclosures could expose trade secrets, while GOP leaders labelled the regulations as “radical left-wing” constraints.

 

Geopolitical considerations further shaped Trump’s deregulatory stance. His administration argued that stringent regulation could cede the AI sector to China, which is investing heavily in reasoning models that rival U.S. capabilities.

 

This strategic tension underscores a critical question:

 

Should safety or competitiveness take precedence in AI policy?

 

AI: America is losing time playing chess with China

 

Key Risks Targeted by Biden’s Executive Order

 

Biden’s policy aimed to address specific risks associated with advanced AI technologies, particularly generative systems such as ChatGPT, which are capable of creating text, images, and videos with minimal human input. The main threats included:

     

• National Security: Mitigating AI-driven vulnerabilities in critical infrastructure and defence systems, such as hacking, misinformation, and autonomous weapons.

• Economic Stability: Addressing job displacement due to automation and market distortions caused by AI monopolies.

• Public Health & Safety: Ensuring AI models in healthcare, transportation, and law enforcement are free from biases, errors, and unsafe outcomes.

• Cybersecurity: Protecting AI systems from adversarial attacks or manipulation.

• CBRN Threats: Preventing AI misuse in developing chemical, biological, radiological, or nuclear weapons.

• Ethical Risks: Mandating fairness audits to avoid discriminatory practices in areas like hiring, lending, and policing.

Biden’s approach leveraged the Defense Production Act to enforce pre-release safety testing for high-risk AI systems, directed NIST to establish bias-correction frameworks, and encouraged federal agencies to adopt AI responsibly.

     

This balanced approach aimed to position the U.S. as a leader in ethical AI innovation.

     

Global Context: China’s AI Reasoning Surge

     

Trump’s deregulatory push coincides with China’s aggressive investments in AI reasoning, a field that enables systems to perform complex, logic-driven tasks like coding, mathematical analysis, and scientific research. Key players include ByteDance (owner of TikTok) and DeepSeek.

       

• DeepSeek’s open-source R1 model rivals OpenAI’s o1, offering cost-effective solutions that run on consumer hardware.

• ByteDance’s Doubao-1.5-pro reportedly outperforms o1 on specific benchmarks, underscoring China’s technical capabilities.

Supported by giants like Alibaba and iFlytek, these advancements could democratise high-performance tools, eroding America’s first-mover advantage. For instance, DeepSeek’s R1 model empowers startups and researchers to bypass expensive cloud services, lowering barriers to AI adoption globally.

       

A Multipolar AI Race

Beyond the U.S.-China rivalry, other nations are making significant AI investments:

• EU: Focused on strict ethical guidelines through the AI Act.

• India: Prioritising affordable AI tools for agriculture and public services.

• UAE & South Korea: Developing niche applications in healthcare and climate modelling.

• Japan: Pioneering robotics-driven AI.

This fragmented global landscape risks complicating international collaboration, as differing regulatory frameworks hinder interoperability.

         

The Risks of AI Protectionism

Protectionist measures by both the U.S. and China, such as export restrictions on semiconductors and talent hoarding, could unintentionally stifle innovation.

For example, restricting access to advanced chips may delay breakthroughs even in the countries imposing such controls. Similarly, visa policies limiting AI researchers could push talent toward more open markets like Canada or the EU.

Smaller economies, however, are leveraging agility to gain ground. The UAE’s Falcon AI model and India’s public-private healthcare partnerships demonstrate how mid-sized players can bypass the pitfalls of protectionism and achieve rapid progress.

         

Balancing Innovation and Safety

The debate over AI regulation highlights a critical crossroads for policymakers. Biden’s approach focused on mitigating risks through precautionary oversight, while Trump’s strategy prioritised deregulation to accelerate innovation and counter China’s advances.

As nations navigate this complex landscape, the stakes extend beyond technological leadership to include ethical implications, global collaboration, and societal impact.

The path chosen will shape the future of AI development and determine which nations set the standards for its safe and ethical use. Balancing innovation with robust safeguards is essential to ensuring that AI serves humanity’s best interests in an increasingly interconnected world.
