📢 CAIDP Update 6.27 - AI Policy News (July 8, 2024) 🇧🇷 Brazil Halts Meta's AI Data Collection 🏭 Google's AI Ambitions Fuel 48% Surge in Emissions 🇻🇳 Vietnam Proposes Strict AI Regulations 🇨🇳 China Dominates Global Generative AI Patent Filings, UN Report Finds 🇷🇺 Russia Mandates Insurance for AI Developers 🌐 OECD and GPAI Forge Alliance to Promote Responsible AI Development 🇳🇱 Dutch Watchdog Warns of Persistent Algorithmic Discrimination in Government 🗣 🏛 CAIDP Provides Comments to Civil Liberties Board on AI and Counterterrorism 🗣 🤖 🔬 CAIDP's Rotenberg Advocates for Robust AI Risk Management at National Academies 🗣 📰 CAIDP Advocates for Opt-In AI Training Policies in NYT Letter #aigovernance #environment #trainingdata #brazil Google #vietnam #china #russia #netherlands OECD.AI GPAI Privacy and Civil Liberties Oversight Board The New York Times The National Academies of Sciences, Engineering, and Medicine Marc Rotenberg
Center for AI and Digital Policy’s Post
More Relevant Posts
-
Marketing Manager & Researcher at International Institute for Counter-Terrorism lI Cyberterrorism and AI MA Student
I am excited to share my first publication, "Generating Terror: The Risks of Generative AI Exploitation," as a co-author published in the Combating Terrorism Center at West Point January Sentinel as the featured analysis piece. The emergence of AI's large language models (LLMs) has significantly reshaped the digital world. Yet, there is an increasing concern about their ability to assist terrorists in enhancing their learning, strategizing, and dissemination of terrorist agendas more effectively and precisely than ever before. Our research investigates the potential exploitation of these advanced models, such as Chat GPT and Bard, particularly through the concept of 'jailbreaking' AI systems. This process involves manipulating commands to override the AI's ethical standards and protocols, potentially generating and spreading extremist, illegal, or unethical material. A thank you and congratulations to my fellow co-authors, Joelle Scheinin, David Diaz, and Rachel Sulciner, for their exceptional collaboration. Special appreciation is extended to Prof. Gabriel Weimann for his invaluable mentorship and expertise, as well as Alexander Pack for his significant contributions. For the full publication: https://lnkd.in/d2WH3dbh International Institute for Counter Terrorism #generativeai #ai #terrorism #extremism #deeplearning
To view or add a comment, sign in
-
What makes Artificial Intelligence fascinating for me is the sheer interest I have in everything happening with and around it. It's akin to diving into an captivating adventure book or watching a masterpiece movie with a plot as intriguing as an Agatha Christie detective novel. 😉 And when this interest aligns with one's professional work, it culminates in unparalleled level of satisfaction because there's nothing better than deriving real pleasure from what you do! 😊 The last two weeks have epitomized this feeling: Last week, my OSCE colleagues and I organized a pilot training in Tashkent for law enforcement officers, judges, and prosecutors on how to investigate crimes committed using the internet and modern technologies, including AI, while respecting human rights. The training course, which took over six months to develop, can now be integrated into the curriculum of law enforcement educational institutions following its pilot implementation. More information here: https://lnkd.in/euUifEhx And yesterday in Vienna, another significant event concluded: The Forum of the OSCE's Secretary General and the Central Asia Border Commanders (https://lnkd.in/eg4hHhnA). It was a great pleasure to present at this forum, and elaborating on the role of AI in border security and management, shedding light on AI technologies, the associated risks, the necessity for responsible use of these tools and grounding the discussion in practical applications. It's worth noting that my AI-generated electronic avatar, from previous videos available online, also performed admirably at this session 😊, perfectly illustrating the importance and relevance of the discussed topics. The session prompted numerous follow-up activities, underscoring that AI is here to stay with us and the importance of its responsible use. I want to extend special thanks to my colleagues who were involved in the organization and delivery of both events described above -without them, the success of these events would not have been possible: Gerrit Zach, Shoghik Sargsyan, Mirza Ulugbek Abdullaev, Sulkhiyo Ruzieva, Evgeniya Lyan, Anna Antipina, Rainer Franosh, Cristina Sganga, Siv-Katrine Leirtroe, Dragica Vucinic, Albina Yakubova, Oksana Kurysheva, Daler Khamidov, Manuel Bergamasco Eugenia Reznikowa, Pedro Silva! #OSCE #HumanRights #CounterTerrorism #BorderProtection #AIandTech
To view or add a comment, sign in
-
-
‘There is clear evidence that terrorists and violent extremists (TVEs) hold an interest in AI and, in many cases, are actively experimenting with this technology’. Today, in collaboration with the Royal United Services Institute (RUSI) we are pleased to release a new report ‘Terrorist exploitation of artificial intelligence: current risks and future applications’. This timely report warns that terrorists and violent extremists (TVEs), will continue to find novel ways to exploit the UK's changing landscape by means of #ArtificialIntelligence (AI) and explores ‘propaganda production’ and distribution, the potency of #AI powdered software applications which fuel radicalisation, assesses the use of AI both in operations and activities and goes on to analyse the outlook for AI over the next 10 years as it moves out of an ‘experimental’ phase. Find a full copy of the report here: https://lnkd.in/eWBgVvh9 #Terrorism #Report
To view or add a comment, sign in
-
Executive in the Government of Canada | Transforming the public service to serve us better #Leadership #Technology #NationalSecurity
As if there weren’t enough scary scenarios with AI… Should we now be concerned with “sleeper agents” infiltrating our large language models? In counterintelligence (and counterterrorism) lingo, a sleeper agent is a human asset that has been covertly and discretely placed into a target environment and deceptively carries on just like everybody else, until they are called to perform some action on behalf of the adversary. “Sleeper” because they lay dormant until activated. But what if, instead of humans living among us, they were exploitable code and backdoor triggers introduced into our AI models? In a paper titled “Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training” (2024), researchers at Anthropic explored a proof of concept wherein an AI system learned a similarly deceptive hide-in-plain-sight strategy, to see if it could be detected and removed using current safety training techniques. Their findings suggest that such “backdoor behaviour” can be made persistent, and resistant to adversarial training. Could this open the (back) door to state actors introducing their own “AI sleeper agents” into the models that support so many of our daily digital interactions? Read all about it here: https://lnkd.in/gdCrGMYj #ArtificialIntelligence #NationalSecurity (Photo: Frank Sinatra in The Manchurian Candidate (1962) © Metro-Goldwyn-Mayer Studios Inc. All Rights Reserved)
To view or add a comment, sign in
-
-
Social Entrepreneur | Founder @ Coin For Change | Stability and Fragility Expert | Systems Thinking and Holistic Programming for the Public Sector | UNODA LEADER FOR DISARMAMENT and OSCE SCHOLAR
The interplay between #innovation and #disarmament is a complex and multifaceted issue, reflecting the broader dynamics of technological advancement and its implications for global security. The use of artificial intelligence (#AI) in military operations, for example, underscores a significant shift in how nations approach both warfare and peacekeeping efforts. A recent Bloomberg report highlights the U.S. military's use of AI to identify targets in Iraq, Syria, and Yemen, showcasing the cutting-edge applications of technology in conflict zones (Bloomberg News, 2024). This development raises important questions about the future of disarmament efforts in an era increasingly dominated by autonomous and semi-autonomous weapons systems. On one hand, the precision offered by AI-driven technologies can potentially reduce collateral damage in military operations, aligning with broader humanitarian goals. On the other hand, the proliferation of such technologies could exacerbate arms races and make disarmament negotiations more complex. The dual-use nature of AI and related innovations means that the same technologies that can be used to advance peace and security can also be harnessed for destructive purposes. The challenge for the international community is to navigate these technological advancements in a way that promotes peace and security while mitigating the risks associated with an increasingly digitized battlefield. This necessitates not only robust international legal frameworks that regulate the use of such technologies but also a commitment to transparency and ethical considerations in the development and deployment of military technologies. As disarmament efforts evolve, they must increasingly contend with the rapid pace of innovation, ensuring that global security measures keep pace with the changing nature of warfare and conflict resolution. The conversation around innovation and disarmament is thus not only about curbing the spread of weapons but also about shaping the ethical and legal contours of their use in the 21st century. Engaging with these issues requires a multidisciplinary approach, bringing together experts from the fields of international relations, ethics, technology, and law to forge pathways towards a more secure and peaceful world. #globalsecurity #TechForPeace #MilitaryTech Link to the Bloomberg article in the comments
To view or add a comment, sign in
-
US Urges AI Deployment in Nigeria to Reduce Accidental Bombings 🌐 In a significant development, the United States, through the Bureau of Arms Control, Deterrence, and Stability, advocates for the deployment of Artificial Intelligence (AI) by the Nigerian Armed Forces. The aim is to minimize the incidence of accidental bombings and enhance precision capabilities while adhering to international humanitarian law. Principal Deputy Assistant Secretary, Paul Dean, emphasizes the tangible benefits of AI in military operations, including improved efficiency, unbiased decision-making, and compliance with humanitarian standards. The U.S. initiative underscores the importance of responsible AI applications in the military domain. Stay connected for updates on this pivotal collaboration, focusing on the positive impact of AI on military operations and global stability. The international community's commitment to responsible AI use aligns with efforts to maximize benefits while minimizing risks. This initiative reflects a forward-looking approach to technology in the military, emphasizing responsible and transparent utilization for enhanced security.
To view or add a comment, sign in
-
-
World presents a humanitarian outlook on military AI advancements. #AIforGood 🤝 Follow us on Discord 🔜: https://lnkd.in/gt823Zd3 _ ❇️ Summary: The International Committee of the Red Cross (ICRC) sees the Beijing Xiangshan Forum as an important opportunity to share its perspectives on armed conflict and emerging technologies. The ICRC President, Mirjana Špoljarić, recently met with President Xi Jinping, who expressed China's willingness to cooperate with the ICRC and support their work in technology and human resources. The ICRC focuses on assessing the impact of weapons on civilians and combatants, both conventional and emerging technologies such as AI. They urge states to adopt new international rules to address the dangers posed by autonomous weapons. The ICRC values its dialogue with China and looks forward to discussing these issues at the forum. Hashtags: #chatGPT #HumanitarianMilitaryAI #EthicalAIWarfare
World presents a humanitarian outlook on military AI advancements. #AIforGood
webappia.com
To view or add a comment, sign in
-
Program Development Specialist | Researcher | Monitoring & Evaluation Practitioner | Founding Director, Scofield Associates | Expert in P/CVE, Peace Building, Border Security, DDR, Migration & Climate Change
"AI data-driven learning models gained traction precisely to address the limitation of human capabilities in data processing by improving the autonomy of decision-making features such as predicting, profiling, and assessing the risk of potential terrorists.” — Now that we are in the age of algorithms, I had to come back and read the article again. Interesting arguments by Andrea Bianchi and Anna Greipl. See it here: https://lnkd.in/d-BZrqER
States’ Prevention of Terrorism and the Rule of Law: Challenging the ‘magic’ of Artificial Intelligence (AI)
icct.nl
To view or add a comment, sign in
-
Narrative Strategist⭐️Geopolitics Analyst & Commentator⭐️Geopolitical Satire⭐️Narrative Influence & Resilience Expert ⭐️Narrative Magic (Owl of O.W.L.)⭐️Knowledge Synthesiser⭐️Lawyer (Ret.)⭐️CEO Sky Canopy Consulting
#AI #IDF #dumbbombs #Hamas #Israel #Gaza #civilians #military #targeting #intelligence #machinelearning ‘The machine did it coldly’: Israel used AI to identify 37,000 #Hamastargets #Israeli #intelligencesources reveal use of ‘#Lavender’ system in Gaza war and claim permission given to kill civilians in pursuit of low-ranking #militants 👁️The Guardian UK: https://lnkd.in/gDjAWcrH 🦉ME: This reminds me of the Chelsea Manning leaks to The Guardian a few years back. There are wider issues raised by this development whatever the true picture is. IDF won’t be telling us anytime soon. Artificial intelligence #algorithms and #militarytargeting have no doubt come a long way since I wrote a paper*on #LAWS* in early 2017. I wrote it from an #internationallaw perspective and concluded then that their algorithms were not sufficiently refined to allow them to be operationalised without a ‘human in the loop’. *LINK to paper ‘The Use and Regulation of Lethal Autonomous Weapons Systems’ (LAWS) posted in my LI Articles October 13, 2018: https://lnkd.in/grtXVRqV Is 20 seconds long enough for a human to monitor the targeting suggestions of #artificialintelligence? I would vigorously contest that. I made the point in my piece that bad actors (state and non-state) would not be bound by international law or #globalregulation via the #UN or other agreement mechanism. #Greatpowercompetition is too intense for that and these #militaryapplications are developed in secrecy and are #classified. Excerpt from The Guardian UK: ‘Lavender was developed by the #IsraelDefenseForces’ #eliteintelligencedivision, #Unit8200, which is comparable to the #US’s #NationalSecurityAgency or #GCHQ in the # #UK. Several of the sources described how, for certain categories of #targets, the IDF applied #preauthorisedallowances for the estimated number of civilians who could be killed before a strike was authorised’. Note: The IDF contests the claims made by the Israeli #intelligenceofficer #sources who spoke to The Guardian.
‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets
theguardian.com
To view or add a comment, sign in