Thursday, April 3, 2025

The Uncertain Future of AI Regulation in a Second Trump Term


The president’s AI action plan, slated for release this summer, will clarify how the United States intends to balance speed and safety within its AI framework. The current risk-tolerant, decentralized model has served speed and innovation well; however, China’s hybrid model of centralized safety and decentralized innovation rules is helping it catch up with the United States. The United States lacks comprehensive federal AI legislation, and the recent House AI Task Force report recommends using existing laws, regulations, and regulatory bodies to develop sector-specific AI regulations. Further national risk mitigation may be necessary for large language models (LLMs) and other more complex uses of AI. The growing dissonance among the United States, the EU, and China over a consensus on AI safety is hazardous. If the United States wishes to maintain leadership in AI development while ensuring safety and public trust, finding that balance will be key.

The Red Cell Project

The Red Cell series is published in collaboration with The National Interest. Drawing upon the legacy of the CIA’s Red Cell—established following the September 11 attacks to avoid similar analytic failures in the future—the project works to challenge assumptions, misperceptions, and groupthink with a view to encouraging alternative approaches to America’s foreign and national security policy challenges.

At the recent Paris summit on artificial intelligence (AI), Vice President JD Vance declared that the United States would go its own way on AI and pushed back against the European Union’s (EU) “excessive” safety-oriented regulations that stifle innovation. The criticism is valid. However, at this pivotal moment in the technology revolution, how the United States will promote rapid innovation without compromising safety is unclear.

AI is advancing faster than anyone imagined. In 2019, it could barely count to ten. Today, it outperforms humans on physics, biology, and chemistry tests. The U.S.-China race to develop machine intelligence that surpasses human intelligence is accelerating this progress. The more advanced AI becomes, the more foundational it will be to modern life and geopolitical power. Yet alongside AI’s immense potential come substantial risks. Fear of these risks has divided the technology community, and polls show that over two-thirds of Americans support the responsible development of the technology.

President Donald Trump could strike an appropriate balance, despite his push for deregulation and innovation and silence on oversight and risk mitigation. Congress supports innovation with guardrails, and the Final Report by the House AI Task Force provides a blueprint for industry leaders. Moreover, leading in AI means leading innovation while maintaining public trust, as Trump recognized in his first term. Trust requires safeguards, multiple surveys show. Furthermore, when administration officials discuss deregulation, they target diversity rules, not engineering safety. AI safety encompasses both engineering safety and societal impacts. If the goal is to use civil rights laws to address discrimination, as in Trump’s first term, this aligns with Congress’ proposal to regulate AI through existing laws—a proposal that Trump is likely to adopt.

The president’s AI action plan, due this summer, will clarify U.S. policy and its position in the evolving global debate on AI governance. Although America’s risk-tolerant, decentralized model has been superior for innovation, and the risk-averse, centralized EU model has set Europe back, China, with its hybrid of centralized safety and decentralized innovation rules, is catching up.

Furthermore, U.S. and global attitudes toward technology giants have soured, underpinning the emerging global consensus on AI safety. The United States rejects this consensus; the EU and China support it. These three governance models will likely coexist globally, and maintaining public trust may confer a competitive advantage. Thus, failure to balance speed and safety within the decentralized U.S. AI framework could undermine American AI leadership. 

U.S. AI Governance: Executive Orders

The United States lacks comprehensive federal legislation enacted specifically to regulate AI. Instead, its governance model is grounded in executive orders, the most important of which is former President Joe Biden’s 2023 AI safety order. The most comprehensive of its kind, this order built on President Trump’s first-term initiatives aimed at maintaining American AI leadership and promoting trustworthy AI. Both administrations pursued safe, transparent AI deployment. Though Trump recently rescinded the Biden safety order, he is likely to restore some of it.

The order provides a template for future executive action on AI because it regulates the deployment of AI in the federal government without the benefit of AI-specific laws. Drawing on existing law and policy, federal agencies translated the order into sector-specific guidelines, which affected the private sector through federal safety requirements in government contracts. The industry also made voluntary commitments to improve safety.

House AI Task Force Report: Key Takeaways

Congress adopted a similar approach to AI governance in the December 2024 Final Report by the House AI Task Force. The report proposes using existing laws, regulations, and regulatory bodies to develop sector-specific AI regulations. This proposal puts federal agencies with deep sectoral expertise, like the Food and Drug Administration and the Department of Health and Human Services, in charge of both regulating AI in industries under their jurisdictions and determining when AI-specific laws are necessary.

Lawmakers implicitly ruled out passing comprehensive legislation regulating AI. Punting the crucial question of federal preemption—whether federal law should override state law—they proposed further study on how federal and state regulations affect AI development and use. By focusing on sector-specific risks and leveraging the expertise of federal agencies, Congress’ approach is likely to inspire public trust while reducing the risk of overregulation hindering innovation. Finally, the task force’s call for safeguards is clear. One of the report’s seven key principles of AI governance is to protect against AI risks through technical and policy solutions, including evidence-based, industry-led testing.

The Unique Challenges of Regulating Large Language Models

The report fails to address the unique, crosscutting challenges posed by large language models (LLMs). LLMs, like ChatGPT, are poorly understood—even by their creators. They predict text sequences based on their exposure to massive datasets, but their behavior is shaped by training, not programming. Thus, they are prone to hallucination—generating false or misleading information. The consequences could be catastrophic in high-risk sectors like healthcare and finance. Imagine an LLM misinterpreting a patient’s symptoms, resulting in a misdiagnosis, or misinterpreting market signals, leading to trading decisions that trigger financial instability. Worse yet, LLMs are not easily “turned off” in emergencies, and ensuring human control is difficult, as instances of LLMs attempting to override it demonstrate. 

These sobering realities underscore the imperative for safeguards, which could take many forms. Establishing a national AI safety institute and updating risk-mitigation standards, as Congress is contemplating, would improve transparency in AI algorithms, for instance. It would not, however, eliminate catastrophic risks. Unless experts learn how AI reasons and can fully reverse engineer algorithms, human control over this technology cannot be guaranteed. As the public demands, binding commitments by industry on safety are needed to hedge against catastrophic consequences.

Deregulation and Safety: The Trump Approach

While safety is critical, the pressure to outrun China has intensified. The January 2025 announcement that Chinese AI labs developed reasoning AI models, like DeepSeek, which rival their U.S. competitors at a fraction of the cost, shocked the industry and the administration. The rapid pace of Chinese innovation, despite U.S. restrictions on Beijing’s access to AI chips, is pushing Washington to prioritize innovation and ease potential regulatory barriers. The targets of deregulation, however, do not appear to be engineering safety but ending diversity, equity, and inclusion (DEI) programs. This explains the motivation for rescinding Biden’s AI safety order, which included DEI provisions that Trump officials deem “biased.”

Advocates for the Biden order argue that it also contains essential safety engineering principles vital to the responsible adoption of any technology with widespread societal impact. Republicans would likely support the preservation of engineering standards for that same reason. Key industry figures in the Trump administration, such as AI and crypto czar David Sacks and Elon Musk, a senior advisor to the president, will also weigh in. They are focused on “ideological bias” as a deregulatory target, but Sacks, as the White House liaison with industry, will need to factor in Silicon Valley’s support for safety guardrails. Further, he is expected to address AI use in critical sectors, suggesting an emphasis on engineering safety. Musk, an AI industry leader, might wield greater influence and has consistently supported AI safety. In 2023, he endorsed a six-month pause on training powerful AI models and supported California Senate Bill 1047, which proposed stricter regulations on large-scale models. Musk has previously called AI an “existential threat.”

The Global Context: Diverging AI Governance Models

As the United States grapples with AI regulation at home, the EU and China are developing their own governance models that reflect their “distinct cultural, political and economic perspectives,” given AI’s pervasive societal impact. These traditions shape each government’s risk-benefit calculus, as well as its choice between centralization and decentralization in designing regulations.

The EU AI Act is a centralized, risk-averse, rights-focused model based on a traditional product safety framework. All obligations are on developers of AI systems, and LLMs are strictly regulated. However, the act is not the last word in the EU. Its implementing rules are being drafted, and European industry concerns about its chilling impact on AI development, echoed by Vance at the AI summit, have not yet been addressed. China’s model is a hybrid between the centralized, top-down EU model and the decentralized, bottom-up U.S. model: centralized on safety but decentralized on innovation, with some competition permitted.

Although there is no global AI governance framework, a global consensus on AI safety is emerging—supported by the EU and China but not the United States. This divergence reinforces the likelihood that the three governance models will coexist globally. U.S. technology giants (Big Tech) can probably bear the regulatory burden; alternatively, Trump could implement his threatened tariffs on the EU if Brussels continues targeting U.S. corporations. U.S. corporate adaptability to multiple regulatory regimes in the world’s major technology markets is crucial to maintaining America’s lead in AI development and standards-setting—a private sector-led process.

The Risks of a Fragmented Global Landscape

Nonetheless, extreme divergence on AI safety amid fragmented global governance could cost the United States. While America leads the world’s top ten most successful AI ecosystems by a wide margin, with China a distant second and the EU, represented by France and Germany, sixth and eighth respectively, China is catching up, despite its centralized safety regulations and U.S. policies constraining its progress. Furthermore, the last decade has seen a sea change in how Big Tech is viewed at home and abroad, as the disruptive impact of technology is felt across society, sparking anxieties about job displacement, data privacy, and humans losing control over AI.

If the United States releases AI models rapidly but sacrifices safety for speed, the risk of mishaps will increase—provoking another major public backlash against Big Tech, reminiscent of 2018. AI development is highly collaborative and benefits from global knowledge-sharing, which will become challenging if the United States remains outside the global consensus on safety. AI firms understand these risks and that preventing potential hazards may confer considerable competitive advantage.

Trump’s Likely Strategy: Balancing Innovation, Safety, and Global Leadership

Consequently, the Trump AI framework is likely to resemble Biden’s. It may combine initiatives to boost innovation, executive orders on safety, and support for congressional safety legislation aligned with minimalist regulation, like the AI Advancement and Reliability Act and the Future of AI Innovation Act, which would establish the U.S. AI Safety Institute and related safety measures.

Some argue that without comprehensive federal AI laws, the growing patchwork of state laws and regulations will create an unpredictable regulatory environment, setting back innovation, and that alternative governance models like the EU AI Act could dominate globally. However, as the veto of California Senate Bill 1047 demonstrates, states are also concerned about overregulation strangling innovation. Europe’s technology sector echoes this concern regarding the AI Act, arguing that it puts Europe at a competitive disadvantage with the United States.

Making AI Work for Humans

AI is poised to become the central strategic technology of the early twenty-first century. The challenge for the United States is clear: maintain leadership in AI development while ensuring safety and public trust. Striking that balance is a practical necessity, essential to the responsible, widespread deployment of AI in the United States, and to ensuring that AI works for humans.

Decentralized AI governance promotes agility and innovation but also comes with the risk of inadequate federal safety regulations. The fact that the EU and China have adopted stricter safety and ethical rules that align with the global consensus on AI safety and that Congress also supports innovation with guardrails suggests the United States should adjust its approach—not by discarding its innovation ethos but by integrating meaningful safeguards into its AI governance. This would ensure that U.S. companies remain competitive in innovation while securing public trust.

As governments wrestle with AI’s immense promise and daunting risks, they would be wise to heed Professor Stephen Hawking’s warning that AI “will be either the best or the worst thing, ever to happen to humanity.” Decisions made today will determine which future the United States embraces.
