As AI innovation and development took off in South Korea, so did unethical and unreliable use of the technology. The country’s AI legal framework that passed in December 2024 was designed to place guardrails around the future of AI. However, its impact is yet to be determined.
Editor’s Note: Seungmin (Helen) Lee is a Senior Engagement Manager for cyber risk and Director of Intelligent Cyber Research at Next Peak. She manages cyber risk consulting engagements from proposals to stakeholder management to deliverables and oversees research projects on cybersecurity risks, AI, national cyber strategy, and more. She is also a 2025 #ShareTheMicInCyber Fellow with New America, researching cyber-secure AI data centers. Previously, she led the Blue Force Tracker Initiative, recording and analyzing international technological defense assistance provided to Ukraine since the onset of the war in 2022, for Cyber Defense Assistance Collaborative (CDAC). She has published pieces with Aspen Digital, Columbia University, The Diplomat, Stimson 38 North, and Tech Policy Press. She received her Master’s in International Affairs at Columbia University’s SIPA in May 2022 and Bachelor of Arts at Columbia University’s Columbia College in May 2021. Helen is fluent in English, Korean, Spanish, Japanese, and Chinese.
By Seungmin (Helen) Lee
When then-South Korean President Yoon Suk-yeol declared martial law late on the evening of December 3, 2024, opposition leader Lee Jae-myung thought the video he was watching on TV must be generated by artificial intelligence (AI). He was not the only one. Bank of Korea President Lee Chang-yong and officials in Yoon’s own presidential office had similar reactions.
These reactions can largely be explained by the nature of the announcement, which was unprecedented and seemed to come out of nowhere, but they also speak to a new era in which AI has become so sophisticated that video can no longer be taken at face value.
South Korea, like other nations, had already been wrestling with how to regulate AI while keeping its domestic IT industry competitive in the fast-growing sector. Weeks after the martial law episode, on December 26, its parliament passed the Basic Act on the Development of Artificial Intelligence (AI) and the Establishment of Foundation for Trustworthiness (“AI Basic Act”). In doing so, South Korea became the first individual nation to pass a comprehensive legal framework for AI; at the time, the only other such framework was the one passed by the European Union (EU), a supranational bloc rather than a single country.
The National Assembly debated and merged 19 bills over four years to come up with the new law, which will come into effect in January 2026. The law attempts to both boost national competitiveness in AI and tackle a range of AI-related societal issues. Despite its ambitions, the impact of the act might not be as significant as lawmakers hope.
South Korea’s AI Efforts: From Innovation to Regulation
Before South Korea first envisioned the AI Basic Act in July 2020, its legislative efforts around AI had focused on boosting national competitiveness. The push began under President Moon Jae-in in 2019, when a government report concluded that the country needed to catch up to the US and Europe in AI capabilities. That led to the 2019 National Strategy for Artificial Intelligence, which aimed to foster AI innovation, capability, and knowledge, and to make South Korea the world’s third-most digitally competitive nation by 2030.
This focus on innovation continued into the Yoon Suk-yeol administration with the 2022 Support Plan for AI Semiconductor Industry Promotion, which promised $786 million over five years toward AI chip technology and the cultivation of 7,000 AI chip experts in Korea. Most recently, in February 2025, acting President Choi Sang-mok announced an investment of over $700 million and a “national AI team” to lead the creation of a large language model—a Korean “ChatGPT.”
However, AI-related risks—including the unethical use of AI and unreliable AI systems—became more prominent challenges for society.
In 2021, the iLuda chatbot was launched, only to be suspended within two weeks after it made racist and discriminatory remarks. It was also revealed that iLuda had been trained on KakaoTalk chat log data without user consent. In December 2023, ahead of the 2024 National Assembly elections, lawmakers passed an amendment to the Public Official Election Act prohibiting election-related deepfakes within 90 days of an election, a move aimed at disinformation that could influence the outcome.
In August 2024, President Yoon declared a national emergency over rampant deepfake pornography. Korean Telegram users, mostly teenage students, had uploaded sexually explicit AI-generated images of people they knew, with most of the victims being underage girls. In response, the National Assembly passed a law in September 2024 criminalizing the viewing of sexually explicit deepfakes, punishable by up to three years in jail or a fine of up to ₩50,000,000 (about US$36,265). Amid this broader crackdown, the National Police Agency received 921 reports of deepfake sex crimes between January and October 2024 and arrested 474 individuals over the same period.
The South Korean AI Basic Act emphasizes both fostering and regulating the AI industry. To promote innovation, the act establishes an AI policy center, an AI safety research institute, and a national AI commission. It also promotes AI transparency by requiring labels on generative AI content, and it supports the safe use, ethics, reliability, and standardization of AI technology.
Like the European Union AI Act, the South Korean law takes a risk-based approach, requiring impact assessments of “high-impact AI systems,”[1] which it broadly defines as those that pose significant risks to, or impacts on, human life, physical safety, or fundamental rights.
Concerns and Limitations of the AI Basic Act
Some in the private sector welcome the law, believing it will support the growth of small and medium-sized AI businesses. Supporters expect the Ministry of Science and ICT (MSICT) to provide clearer implementation guidelines and sub-regulations in the second half of 2025. Yet, despite the fanfare around the law, the South Korean AI Basic Act is likely to have limited impact for a few reasons.
First, the law does not explicitly address some of the most pressing societal issues, such as political and sexual deepfakes. Rather than restricting the criminal use or misuse of AI tools, the law focuses on placing guardrails around the integration of AI into critical systems, businesses, and legitimate processes. This gap will make it difficult to restore public confidence in genuine content, the very doubt on display during the martial law declaration.
Second, critics argue that the law lacks a clear definition of “high-impact AI systems” and claim that it could weaken the domestic AI industry by stifling innovation with regulation. Legal experts worry that even though the law allows AI businesses to ask the MSICT to confirm whether an AI system is high-impact, the act’s definition is too broad to apply predictably.
Finally, the act lacks enforcement mechanisms, although it does call for lower-level laws, committees, groups, and commissions to establish further policies, laws, and standards to enforce it. The act itself outlines only four violations that could lead to a fine of up to ₩30,000,000 (about US$21,760): the leaking of sensitive information by AI commission members; the failure of high-impact AI system operators to transparently notify users; the failure of an AI company to register a domestic address or agent; and non-compliance with orders issued under the law.
Dependent on Progress in 2025
The regulatory impact of the AI Basic Act depends heavily on the subsequent laws and committees yet to be put in place. In early 2025, the MSICT established a team to draft subordinate statutes detailing enforcement mechanisms for the law and created a task force to further develop the legal framework for high-impact AI systems, including additional criteria, examples, and obligations.
Despite these efforts, some critics are pushing to delay implementation until January 2029. Industry, lawmakers, and the MSICT cite the importance of developing and innovating AI technologies at what they call a critical moment in the AI revolution, rather than inhibiting development with immature policies. The same critics also worry that the enforcement mechanisms may not be in place by 2026.
The impact and effectiveness of the AI Basic Act remain unclear; further evaluation will be needed after this year of preparation and development.
Notes
1. South Korea’s AI Basic Act, Chapter 1, Article 2, lists AI systems used in the following sectors or processes as high impact: energy supply; drinking water production and management; health care operations; medical device usage and development; management and operation of nuclear materials or facilities; analysis and use of biometric information in investigations; judgments and evaluations that impact individual rights, such as employment and loan screening; major operation and management of transportation systems and facilities; decision-making by local governments and public institutions that could impact the public; student evaluations from preschool through secondary education; and other areas that could impact human life or basic rights.