Freedom House publishes annual Freedom on the Net Report
This week, Freedom House released its annual Freedom on the Net report, with this year’s edition focusing on the role of online trust in free and fair elections. The report highlights two major concerns: censorship and the erosion of trust in the electoral process. According to Freedom House, internet users in fifty-six of the seventy countries surveyed were arrested for their political, social, or religious expression. Freedom House also evaluated efforts undertaken by governments during national elections to strike a healthy balance between misinformation takedowns and the promotion of free speech. The group graded governments’ efforts in this area across four categories: transparency in decision-making on combatting misinformation, meaningful engagement with local civil society, independent implementation and democratic oversight, and adherence to international human rights standards. The report highlights South Africa’s approach during its May 2024 elections as an example of successful regulation; the country allowed the public to report false information, harassment, hate speech, and incitement to violence to the Electoral Commission of South Africa (IEC), which assessed the reports according to narrow criteria. If a report met those criteria, the IEC would refer the complaint to the Electoral Court to determine whether it violated election laws, to platforms to determine whether it violated their terms of service, or to the media to publicly debunk. This reporting pipeline and its system of checks could help prevent any one body from wielding too much power over the information space while still addressing threats to it.
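The shape of that pipeline can be summarized in a short sketch. This is only an illustration of the two-stage structure described above; the category names, criteria check, and mapping of complaint types to referral targets are simplified assumptions, not the IEC’s actual rules or systems.

```python
# Hypothetical sketch of the two-stage complaint pipeline described above.
# Category names, criteria, and referral mappings are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Complaint:
    category: str        # e.g. "false_information", "harassment", "hate_speech", "incitement"
    meets_criteria: bool  # whether the IEC's narrow assessment criteria are satisfied

def route_complaint(complaint: Complaint) -> str:
    """Decide where a public report goes after the initial assessment."""
    if not complaint.meets_criteria:
        return "dismissed"            # does not meet the narrow criteria; no referral
    if complaint.category == "false_information":
        return "media"                # refer for public debunking
    if complaint.category in ("harassment", "hate_speech"):
        return "platforms"            # assess against platform terms of service
    return "electoral_court"          # assess against election law

print(route_complaint(Complaint("false_information", True)))  # -> media
```

The point of the structure is that no single body both assesses and adjudicates a complaint: the commission filters, while courts, platforms, and the media each handle only the questions within their remit.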
EU passes Cyber Resilience Act
The Council of the European Union adopted the Cyber Resilience Act (CRA) on Thursday, setting cybersecurity requirements for products with digital elements sold in the European Union. The CRA will significantly expand cybersecurity regulations for networked devices, especially Internet of Things (IoT) devices such as cameras and refrigerators, which connect to the internet and are often poorly secured; malicious hackers have frequently compromised IoT devices and integrated them into botnets, which can be used to hide malicious web traffic or launch distributed denial of service (DDoS) attacks against other systems. The CRA will require all products that are directly or indirectly connected to the internet to be free of known vulnerabilities; have secure settings and access controls; protect data confidentiality, integrity, and availability; limit data processing and attack surfaces; mitigate exploitation risks; and provide security logs. Manufacturers will also be required to issue security updates for connected devices for at least five years after manufacture. Manufacturers that fail to comply with the CRA will face fines of up to €15 million (roughly $16.24 million) or 2.5 percent of annual global turnover for the previous fiscal year, whichever is greater. The CRA will be published in the European Union’s Official Journal and will enter into force twenty days after publication.
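As a quick illustration of how the “whichever is greater” penalty ceiling works, the snippet below computes the maximum possible fine for a given turnover; the turnover figure is a made-up example, not data from the regulation.

```python
# Illustrative sketch of the CRA's maximum-fine rule described above: the ceiling
# is the larger of a fixed amount (EUR 15 million) and a share of the previous
# fiscal year's global turnover. The example turnover is invented.
FIXED_CAP_EUR = 15_000_000
TURNOVER_SHARE = 0.025  # 2.5 percent

def max_fine(annual_global_turnover_eur: float) -> float:
    """Return the maximum fine for non-compliance, whichever cap is greater."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_global_turnover_eur)

# A hypothetical manufacturer with EUR 2 billion in annual global turnover:
print(max_fine(2_000_000_000))  # -> 50000000.0, since 2.5% exceeds the fixed cap
```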
Meta takes down influence campaign in Moldova, days before country’s elections
Meta announced this week that it removed a network targeting Russian-speaking audiences in Moldova ahead of the country’s October 20 election, taking down seven accounts, twenty-three pages, and one Group from Facebook, as well as twenty Instagram accounts. The network primarily revolved around fabricated Russian-language news brands posing as legitimate news outlets and distributing content aimed at discrediting pro-European politicians, including President Maia Sandu, while supporting pro-Russian politicians and narratives. The Moldovan presidential election coincides with a referendum on whether the country should enshrine its pursuit of EU membership in its constitution. Russia strongly opposes Moldova joining the EU and aligning itself more closely with Western democracies. Moldovan authorities reported this week that they had blocked dozens of Telegram channels and chatbots linked to a drive to pay voters to cast “no” ballots in the referendum, which also takes place on Sunday. The Telegram channels were reportedly organized by fugitive businessman Ilan Shor from Moscow, where he currently lives in exile, in order to undermine the referendum.
OpenAI releases report on use of its products for election misinformation
OpenAI released a report late last week on the use of its products for influence and cyber operations. The report contains insights into the potential impact of generative AI on elections and real-world examples of the use of tools like ChatGPT in cyberattacks. OpenAI said it had detected four separate networks using its tools to influence elections over the past year; only one of those networks, an influence operation in Rwanda aimed at generating comments around the country’s election, involved the creation of purely electoral content. Other operations generated content across a range of themes, such as an Iranian influence operation that produced social media comments and long-form articles on the U.S. election, politics in Venezuela, Scottish independence, and Western policy toward Israel. OpenAI also found several threat actors using generative AI tools in cyberespionage campaigns. These actors integrated the tools into various stages of their intrusions; one Chinese group appeared to use several of OpenAI’s tools to gather information on common vulnerabilities in a targeted system, debug code being used to break into that system, and create phishing lures based on social engineering.
AI-based, pro-Trump bot campaign posted on X 130,000 times this year, likely domestic in origin
A report from researchers at Clemson University detailed a large-scale information operation on the platform X (formerly known as Twitter) that has backed the Trump campaign and several Republican politicians for at least the past ten months. The researchers identified at least 686 accounts that are part of the network, which had collectively posted 130,000 times since January. Engagement with the bots was relatively low, amounting to only 2,453 reposts and 3,131 likes; however, the network likely gained exposure by replying only to popular posts. The network has been decidedly pro-Trump in its posting, but it also appears to have been used to back Republican politicians in contested primary elections. It supported Republicans in Arizona, Montana, Ohio, Pennsylvania, and Wisconsin, and also backed a ballot measure in North Carolina that would strengthen the state’s voter ID laws. The bots made heavy use of generative AI, evidenced by replies from the network that contained stock responses from generative AI tools. The network’s operators appear to have adapted their tradecraft over time, switching from OpenAI’s more restrictive models to Dolphin, an LLM that puts fewer controls in place than most other publicly available models. Notably, despite widespread concerns about foreign influence operations targeting the U.S. elections, this network appears to be domestic in origin. Unlike previously observed foreign networks, the X network was hyperspecific in targeting a small number of politicians and electoral races, and did not focus on broader themes in American society that are often of interest to malicious actors. Experts said that, if the network is American, it is most likely not illegal under U.S. law.
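One of the telltale signals mentioned above, replies containing stock generative-AI responses, lends itself to a simple screening heuristic. The sketch below is a minimal illustration of that idea, not the Clemson researchers’ actual methodology; the phrase list, threshold, and data format are assumptions for the example.

```python
# Minimal sketch of screening posts for stock generative-AI phrases.
# This is an illustrative heuristic, not the Clemson researchers' method;
# the phrase list is an assumption and would catch only unedited LLM output.
import re

STOCK_PHRASES = [
    r"as an ai language model",
    r"i cannot fulfill (this|that) request",
    r"i'm sorry, but i can'?t",
    r"i cannot provide a response",
]

def looks_like_stock_ai_reply(text: str) -> bool:
    """Return True if a post contains a phrase typical of unedited LLM output."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in STOCK_PHRASES)

def flag_accounts(posts_by_account: dict[str, list[str]], min_hits: int = 2) -> list[str]:
    """Flag accounts whose posts repeatedly contain stock AI responses."""
    flagged = []
    for account, posts in posts_by_account.items():
        hits = sum(looks_like_stock_ai_reply(p) for p in posts)
        if hits >= min_hits:
            flagged.append(account)
    return flagged

# Example usage with made-up data:
sample = {"acct_1": ["I cannot fulfill this request.", "As an AI language model, I can't comment."]}
print(flag_accounts(sample))  # -> ['acct_1']
```

In practice, researchers combine signals like this with posting cadence, reply targeting, and account-creation patterns, since operators can easily strip out the most obvious stock phrases.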
Maya Schmidt is the intern for the Digital and Cyberspace Policy Program.