
AI Security Strategy and South Korea’s Challenges


AI has emerged as a foundational component in the twenty-first-century digital power competition. Its influence extends far beyond technological innovation, encompassing military, cybersecurity, and sociocultural dimensions. Early leadership in core AI technologies—including AI algorithms, chipsets, and large language models—offers not only economic advantages but also strategic autonomy and technological sovereignty. However, due to its inherently dual-use nature, AI presents complex security challenges; it can be leveraged to enhance weapons automation, amplify the scale and precision of cyberattacks, and facilitate the rapid dissemination of disinformation.

There are already real-world examples of AI posing national security threats. In the Russia-Ukraine war, AI-powered drones have been used for target recognition and automated strikes. During the Israel-Hamas conflict, AI technologies were employed to identify attack targets. In addition, Google has reported that over 57 state-backed hacking groups, including groups from China, Iran, North Korea, and Russia, are using AI to advance cyberattacks and information operations. These developments underscore the urgency of integrating AI security into national defense planning and international security frameworks.

South Korea’s AI Security Policy

In March 2024, South Korea joined in adopting the UN resolution on safe, secure, and trustworthy AI. In May 2024, the South Korean government announced the “Seoul Declaration for Safe, Innovative, and Inclusive AI” during the AI Seoul Summit. Domestically, the government has pushed efforts to promote AI use and improve safety, including launching a national AI strategy, establishing the National AI Committee, and creating an AI Safety Research Institute. In January 2025, the “AI Basic Act” was passed; it is scheduled to take effect in January 2026.

The AI Basic Act is a comprehensive law aimed at promoting the development of AI technology and building public trust. It covers national-level policies for AI development, support for industry growth, data center initiatives, ethical principles, obligations to ensure transparency and safety, and regulations for high-risk and generative AI systems. It introduces a risk management system to ensure the overall safety of AI systems and adopts a certification-based trust mechanism. It is designed to guarantee private sector autonomy, while the public sector oversees the process to ensure that appropriate levels of safety and accountability are upheld. In particular, AI systems that are likely to have a significant impact on human life, safety, or fundamental rights are designated as “high impact” and subject to strict safety requirements.

In addition to the National AI Committee, a new interagency body—the National AI Security Consultative Group—was launched in March 2025. To address national security threats arising from the use and spread of AI, the National Security Office was designated as the control tower, with the National Intelligence Service (NIS)—which oversees South Korea’s cybersecurity—serving as the lead coordinating agency.

Despite these broad efforts, South Korea’s AI strategy and legal framework still focus more on industry promotion and safety than on national security. Article 4 of the AI Basic Act excludes defense and security uses of AI from its scope, on the assumption that other agencies handle those areas separately and that they are difficult to align with civilian regulations. However, no clear legal basis for using AI in national defense or security exists outside the act either, making it difficult to build solid strategies for such uses.

Also, unlike the EU AI Act, which classifies AI systems into four risk levels and regulates them accordingly, South Korea’s framework minimizes or excludes regulation for most low-risk AI systems. Yet even low-risk systems can become serious national security threats if misused intentionally or systematically, which means the framework needs a security-centered update.

The AI Safety Research Institute, established under the act, currently has only 14 staff members. Yet it has been assigned a wide range of responsibilities, including safety evaluation, policy research, technology development, standardization, and international cooperation. Given this scope, it may not have the capacity to focus deeply on AI risks in defense and security contexts.


Moreover, while the law includes provisions to ensure the safety and trustworthiness of AI systems, it does not explicitly use the term “cybersecurity,” nor does it address specific measures against hacking or cyberattacks. It also lacks clear connections to existing laws on cybersecurity or data protection. This suggests that cybersecurity—despite being a core component of AI security—is addressed only within the broader concept of general safety.

South Korea’s national AI strategy includes plans to develop AI-related defense policies and infrastructure, but the direction remains vague. The newly elected president emphasized AI innovation in his campaign pledges but made no reference to AI security. Moreover, various ministries and working groups operate in parallel with ambiguous mandates, raising concerns about overlap and a lack of coordination.

Strengthening AI Security in the United States and the United Kingdom

In 2024, the United States officially designated AI as a national security asset through its National Security Memorandum. Under the Biden administration, the government took the lead in strengthening AI-related security by issuing an Executive Order on Safe, Secure, and Trustworthy AI. This included efforts to set and verify security standards for AI models, including cybersecurity protections.

In 2025, the Trump administration repealed the previous executive order but emphasized economic security by issuing a new order—Removing Barriers to American Leadership in Artificial Intelligence. This order aimed to lower domestic regulatory barriers while maintaining national security through export controls that limit adversarial countries’ access to advanced AI technologies. Additionally, the 2025 National Defense Authorization Act emphasized the use of AI for national defense operations, cybersecurity, and biotechnology. The Department of Defense has also directed the use of AI to detect cyber threats within the Defense Industrial Base. U.S. intelligence and defense agencies, such as the Office of the Director of National Intelligence and the Department of Defense, are currently operating AI-based systems for threat detection and information analysis.

The United Kingdom also considers AI a national strategic asset. It has expanded AI use not only for economic growth but also for national security purposes, while simultaneously strengthening AI security. These priorities are reflected in the AI Opportunities Action Plan 2025 and the Strategic Defence Review. In terms of cybersecurity, the United Kingdom adopts a guidance-based, advisory approach rather than centralized regulation and seeks to enhance cybersecurity across the AI supply chain. In February 2025, the United Kingdom renamed its “AI Safety Institute” to the “AI Security Institute,” shifting the focus from general safety to countering national-level threats such as AI-driven cyberattacks, criminal use, and biological weapons.

These developments in U.S. national and economic security policy have a significant impact not only on South Korea but also on global tech policies, export strategies, and industrial ecosystems. In January 2025, the U.S. Bureau of Industry and Security announced a new export control policy for AI. Although it has since been repealed and is under review for replacement, the policy would have expanded the Entity List and applied the Foreign Direct Product Rule, which extends U.S. export restrictions to foreign-made products that incorporate U.S. technology. This move is seen as a way to fill gaps in allied export regulations and intensify restrictions on China. The recent U.S. Department of Commerce restrictions on Huawei chips are part of this trend. As a result, South Korea may face indirect pressure and regulatory risks concerning AI chip manufacturing and semiconductor exports. Such pressure could disrupt the stable supply of high-performance chips and hinder South Korea’s ability to secure computing infrastructure and develop key AI technologies for national security.

Moreover, U.S. and UK efforts to modernize their AI-based defense and cybersecurity systems could set new technical and policy standards for allies like South Korea. These developments highlight the growing need for interoperability and intelligence sharing among allied countries. For South Korea, this means maintaining a sufficient level of AI capability and cyber defense strength to remain a viable partner in international security cooperation and technological alliances.

Additionally, in contrast to their participation in the 2023–2024 AI safety summits, the United States and the United Kingdom did not sign the Paris declaration on the ethical development of AI during the 2025 Paris summit. The decision came shortly after the release of China’s DeepSeek model and is viewed as part of a broader effort to assert leadership in AI and counter China’s growing influence. It demonstrates that even allies differ in their approaches to AI safety and security, underscoring the need for multilayered, adaptive frameworks for international cooperation that can address diverse and emerging issues.

South Korea’s Institutional Challenges and Cooperation

As global competition over AI intensifies, cooperation with trusted international allies has become essential, even for technologically advanced nations. South Korea, as a key U.S. ally and a strategic hub for Indo-Pacific security and advanced technology collaboration, plays a pivotal role. However, the current absence of a robust and comprehensive AI security framework in South Korea poses challenges to allied technological cohesion. Specifically, it may impede effective information sharing, reduce interoperability, and weaken joint deterrence and threat response strategies.

Strengthening South Korea’s AI security capabilities is essential to expand its responsibilities as a strategic partner. Doing so would not only reduce the overall security burden on the United States but also contribute to the strategic stability of their technology alliance. A stronger cooperation framework would also enhance mutual trust between the two countries and improve the effectiveness of joint risk management. This could be achieved through coordinated U.S.–South Korea collaboration that is flexibly adjustable based on shared interests. In particular, AI and semiconductor export control policies would benefit from close dialogue to ensure they are aligned with both countries’ long-term goals of technological cooperation and supply chain resilience. At the operational level, both countries should strengthen policy and technical cooperation to lay the groundwork for shared growth.

South Korea should consolidate its central role in the global semiconductor supply chain while also positioning itself as a strategic leader in AI technologies and security infrastructure. This requires dividing roles with allies based on complementary strengths in key areas—such as AI chips, cloud infrastructure, and data centers—while reinforcing its own core capabilities. Such efforts would contribute to building a more resilient and trust-based global supply chain.

Domestically, South Korea should establish the institutional framework for leveraging AI in defense and national security. This requires clear legal and strategic frameworks for the secure use of AI. It is also necessary to clarify and streamline the roles of the various agencies and security councils to reduce overlap and establish coordinated structures. In addition to the existing AI Safety Research Institute, which primarily focuses on mitigating ethical and safety risks in the private sector, South Korea should consider designating a separate institution dedicated to addressing national security threats. This body should be empowered with clear mandates, sufficient resources, and the authority to develop and implement security-focused AI technologies and policies. Its responsibilities would include identifying emerging threats and ensuring the safe integration of advanced AI tools into military and governmental systems.

In parallel, South Korea must strengthen its technical capabilities in key areas such as AI-based threat detection, automated response systems, and cyberattack prediction. Active participation in U.S.- and UK-led AI safety assessment frameworks, global technical standards, and information-sharing mechanisms will be important. In particular, there must be a system to quickly assess how advanced private-sector technologies can be safely integrated into national security systems.

Ultimately, AI must be viewed not only as a driver of innovation and industrial development but also as a strategic asset essential to national defense. This requires adopting a comprehensive national security perspective on the AI ecosystem—including data, semiconductors, and cloud infrastructure—and promoting greater public and institutional awareness of its long-term strategic implications.

Sunha Bae is a visiting fellow in the Strategic Technologies Program at the Center for Strategic and International Studies (CSIS) in Washington, D.C. So Jeong Kim is an adjunct fellow (non-resident) in the Strategic Technologies Program at CSIS.



