Center for AI Policy

Government Relations Services

Washington, DC · 4,158 followers

Developing and promoting policy to mitigate catastrophic risks from AI

About us

The Center for AI Policy (CAIP) is a nonpartisan research organization dedicated to mitigating the catastrophic risks of AI through policy development and advocacy. Operating out of Washington, DC, CAIP works to ensure AI is developed and implemented with the highest safety standards.

Website
https://aipolicy.us/
Industry
Government Relations Services
Company size
2-10 employees
Headquarters
Washington, DC
Type
Nonprofit
Founded
2023

Updates

    On Wednesday, June 26th, 2024, the Center for AI Policy held a briefing for House and Senate staff on Protecting Privacy in the AI Era: Data, Surveillance, and Accountability. The Center's Executive Director, Jason Green-Lowe, moderated a discussion among a panel of esteemed privacy experts:
    • Ben Swartz, Senior Technology Advisor at the Federal Trade Commission (FTC)
    • Brandon Pugh, Director of Cybersecurity and Emerging Threats Policy at the R Street Institute
    • Maneesha Mithal, Partner at Wilson Sonsini Goodrich & Rosati and co-chair of the firm’s privacy and cybersecurity practice
    • Mark MacCarthy, Adjunct Professor at Georgetown University and Nonresident Senior Fellow at The Brookings Institution
    If you missed the event, you can watch a video recording here: https://lnkd.in/ebKv2X3d

    Meta Releases Llama 3.1

    In May 2020, OpenAI released GPT-3, a large language model that used approximately 10^23 operations during training. About 23 months later, Meta released OPT-175B, which was, roughly speaking, a replica of GPT-3. Now, about 16 months after OpenAI’s March 2023 release of GPT-4, Meta has again caught up to OpenAI with the release of Llama 3.1, a “herd” of three Llama models with 8 billion (8B), 70B, and 405B parameters.

    Meta expended immense resources to build Llama 3.1-405B, which performs competitively with the best AI systems in the world:
    • 16,384 NVIDIA #H100 chips executed over 10^25 operations to train the model. At about $400,000 per 16 chips, this full arsenal could easily be worth $400 million.
    • Well over 200 employees worked on the project as “core contributors,” plus hundreds of additional employees in less involved roles. Top AI engineers can command multi-million-dollar pay packages, so Meta may have paid over $100 million in labor costs.
    • Over 15 trillion tokens served as text data for training the model. Meta offers limited information about the “variety of sources” behind this dataset. Nonetheless, data access deals can cost over $100 million per year.
    • Varying hardware demands occasionally caused “instant fluctuations of power consumption across the data center on the order of tens of megawatts, stretching the limits of the power grid.” For reference, 10 megawatts sustained for a year would supply over 80,000 megawatt-hours, enough to power thousands of US households.

    In total, Meta may have spent half a billion dollars on this project’s resources. And they are poised to spend more; CEO Mark Zuckerberg told Bloomberg that “we’re basically already starting to work on Llama 4.”

    The Llama 3.1 models, including versions without safety guardrails, are available for anyone on the internet to download. Facebook and Instagram users can access the 405B version for free through a chatbot interface at meta.ai.

    Meta tested Llama 3.1 for risks related to cyber, chemical, and biological weapons, and it appeared to be safe. However, the testing failed to account for ways that malicious actors might modify, tune, and specialize the model to cause harm. This raises national security risks, as actors in China, Russia, and Iran are already using less modifiable AI models to assist in influence operations and cyberattacks. This serious oversight in Meta’s safety testing demonstrates the need for stronger safety practices at billion-dollar AI companies.

    #AI #AIPolicy #Llama3

    Pictured: The start of Meta's technical report on Llama 3.
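    For readers who want to check the rough figures above, here is a minimal back-of-envelope sketch in Python. The per-server price, the 10 MW power swing, and the average household consumption are illustrative assumptions drawn from the post and public averages, not Meta's actual accounting.

```python
# Back-of-envelope estimates for the Llama 3.1-405B training run.
# Inputs are rough public figures or illustrative assumptions.

CHIPS = 16_384                   # H100 GPUs reported for the training run
PRICE_PER_16_CHIPS = 400_000     # assumed ~$400k per 16-GPU server
hardware_cost = CHIPS / 16 * PRICE_PER_16_CHIPS
print(f"Hardware: ~${hardware_cost / 1e6:.0f} million")   # ~$410 million

POWER_MW = 10                    # scale of the reported power swings, in MW
HOURS_PER_YEAR = 24 * 365
annual_mwh = POWER_MW * HOURS_PER_YEAR
print(f"10 MW sustained for a year: {annual_mwh:,} MWh")   # 87,600 MWh

US_HOUSEHOLD_MWH_PER_YEAR = 10.5  # rough average annual US household use
print(f"Equivalent households: ~{annual_mwh / US_HOUSEHOLD_MWH_PER_YEAR:,.0f}")
```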

    The Center for AI Policy is proud to support this call for Congress to authorize the US AI Safety Institute.

    Ensuring the continued authority of the U.S. in the development of AI standards is central to maintaining our global leadership position in the field. That is why, alongside our friends at the Information Technology Industry Council (ITI), we are calling on Congress to authorize the U.S. Artificial Intelligence Safety Institute (AISI) within the National Institute of Standards and Technology (NIST).

    We are proud to be joined in this effort by more than 45 leading industry, civil society, nonprofit, university, trade association, and research laboratory groups, all of whom are focused on accelerating the widespread adoption of AI.

    Lawmakers have an opportunity to bolster the U.S. AI ecosystem by enshrining the AISI in statute so that it can confidently develop the safety tools and guidelines that are foundational to guaranteeing trust and confidence in the technology.

    Read the full congressional letter and our statement: https://lnkd.in/evFdxgQj

    Leading Tech Advocacy and Industry Groups Call on Congress to Authorize U.S. AI Safety Institute

    https://responsibleinnovation.org

    Meta Conducts Limited Safety Testing of Llama 3.1

    Last Tuesday, Meta released Llama 3.1, which it describes as the “first frontier-level open source AI model.” It was trained with 3.8 × 10^25 FLOP, enough to require pre-registration and basic benchmark testing under the Center for AI Policy (CAIP)’s model legislation, but not quite at the 10^26 FLOP frontier level that would trigger the need for a full licensing application.

    There are several troubling issues with the Llama 3.1 release. The draft standard from the Open Source Initiative calls on open-source models to provide “sufficiently detailed information about the data used to train the system, so that a skilled person can recreate a substantially equivalent system using the same or similar data.” However, Meta notes only that its data comes from “a variety of data sources,” which falls far short of the standard. Professor Kevin Bankston, the Senior Advisor on AI Governance for the Center for Democracy & Technology, also notes that the model comes with “extensive use restrictions,” which are inconsistent with open source methodology, and appears to lack “evals or mitigations for bias around race, gender, or other protected categories.”

    Another concern about Llama 3.1 is its energy usage: Meta admits that fluctuations in GPU activity “can result in instant fluctuations of power consumption across the data center on the order of tens of megawatts, stretching the limits of the power grid.” As the power demands of AI training runs increase, it appears that we will need measures to limit the resulting pollution and ensure that consumers are not left without electricity.

    CAIP’s special focus is on catastrophic risk: we are concerned that powerful AI models could be used to help terrorists develop weapons of mass destruction, that they could destabilize essential infrastructure, or that they could result in rogue AI agents spreading unchecked across the Internet.

    Full post here: https://lnkd.in/e7bw9dw7

    HT Jason Green-Lowe
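    As a rough illustration of how tiered compute thresholds of this kind work, here is a minimal sketch. The 10^26 FLOP frontier figure comes from the post above; the 10^25 FLOP pre-registration cutoff is an assumption for illustration only (the post states just that 3.8 × 10^25 FLOP qualifies for that tier), and the function name is hypothetical.

```python
# Sketch of a compute-threshold check, loosely modeled on the tiers described above.
PREREGISTRATION_FLOP = 1e25   # assumed lower tier: pre-registration + benchmark testing
FRONTIER_FLOP = 1e26          # cited upper tier: full licensing application

def required_oversight(training_flop: float) -> str:
    """Return the oversight tier implied by a model's training compute."""
    if training_flop >= FRONTIER_FLOP:
        return "full licensing application"
    if training_flop >= PREREGISTRATION_FLOP:
        return "pre-registration and basic benchmark testing"
    return "no special requirements"

print(required_oversight(3.8e25))   # Llama 3.1-405B -> pre-registration tier
```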

    Today's AI Policy Daily highlights (July 29, 2024):
    1. US security concerns over TikTok and ByteDance, with allegations of sensitive American user data stored in China.
    2. Trump's shift towards supporting cryptocurrency, vowing to make the US the "crypto capital of the planet" and end "persecution" of the industry.
    3. The Senate is preparing to vote on a kids' online safety bill that addresses online platforms' data collection on children for the first time since 1998.
    4. New Jersey's $500 million tax credit initiative to become an AI epicenter.
    5. The Biden administration's new AI measures include Apple signing on to voluntary AI commitments.
    Check it out here: https://lnkd.in/e5gvsc8b
    #ai #artificialintelligence #aipolicy #aiprogramming #airegulation #aisafety

    6️⃣ Stats Sunday — July 28, 2024:

    19 seconds: Google DeepMind’s AlphaGeometry 2 solved a 2024 International Mathematics Olympiad (IMO) problem in 19 seconds. https://lnkd.in/gncrqvrc
    + Collectively, DeepMind’s AI systems solved four out of six problems from this year’s IMO.
    + “The IMO is the oldest, largest and most prestigious competition for young mathematicians, held annually since 1959. Each year, elite pre-college mathematicians train, sometimes for thousands of hours, to solve six exceptionally difficult problems.”

    100k H100 GPUs: Elon Musk reports that his Memphis computing cluster has started AI training and contains 100,000 #H100 AI chips. https://lnkd.in/etsP5HHX
    + “This is a significant advantage in training the world’s most powerful AI by every metric by December this year.”
    + It’s unclear whether all 100,000 of the H100s are operational, or only some.

    73%: In a new poll from Data for Progress and Accountable Tech, 73% of US voters agreed that “Congress should not fund the research and development of AI until they pass laws that require AI companies to implement safety guidelines and third-party testing on their products before being sold to the general public.” https://lnkd.in/eWbANf_e
    + The statement received strong, bipartisan support from 71% of Democrats and 74% of Republicans.
    + Respondents were asked to choose between this statement and a similar, unpopular one that Congress should fund R&D “regardless” of passing safety requirements.

    $500 million: The generative AI startup Cohere raised $500 million. https://lnkd.in/dCDURJVs
    + Investors include Cisco, AMD, and Fujitsu.

    $84.7 billion: In the second financial quarter of 2024, Google’s parent company Alphabet earned $84.7 billion in revenue. https://abc.xyz/investor/
    + That’s over $10 billion more than in the second quarter of 2023.

    $276 billion: The emerging industry of AI Assurance Technology (AIAT) could reach an annual value of $276 billion globally by 2030, according to a comprehensive report. https://www.aiat.report/
    + AIAT is “the software, hardware, and services that enable organizations to more effectively, more efficiently, or more precisely mitigate the risks of AI.”

    #AI #AIPolicy #SixStatsSunday

    CAIP Proposes 2024 AI Action Plan

    The Center for AI Policy (CAIP) appreciates the difficult legislative environment currently facing Congress. Although there is a critical need to address AI policy, this year’s Congressional calendar is significantly abbreviated. The upcoming election, the political conventions, and the divisiveness arising as members of Congress and the national parties seek to contrast policy differences to voters and posture for political advantage all make it more challenging to pass major legislation.

    However, some challenges demand immediate attention. The illicit use of artificial intelligence (AI) tools, like deepfakes, threatens American election integrity and empowers fraudsters preying on Americans. These challenges are immediate and will evolve exponentially, just as the technologies underlying the AI boom accelerate and AI use cases increase. As AI becomes more capable, the associated opportunities and risks will also increase, and it is vital that the federal government leads on AI now for societal, economic, and national security reasons.

    How can Congress take the lead on AI safety despite the truncated legislative calendar? To answer that question, CAIP today introduces its 2024 Action Plan, which includes three short, consensus-driven legislative proposals to demonstrate American leadership in AI and bolster AI safety. Included in the proposal are:
    • Efforts to codify and bolster cybersecurity standards across the American AI ecosystem;
    • Steps to require emergency preparedness for current AI threats and those beyond the horizon; and
    • Enhancements to whistleblower protections to preemptively identify concerns while respecting trade secrets and potentially providing compensation for highlighting risks.

    CAIP looks forward to a discussion on the proposed 2024 Action Plan and to working with the Congressional and stakeholder communities to address issues including American AI leadership and safety.

    You can access the CAIP 2024 Action Plan here: https://lnkd.in/eYzjeqqK

    -- Brian Waldrip

    A new study has found that GPT-4 will generate harmful output in response to a technique called ‘covert malicious finetuning’. In this experiment, researchers uploaded harmful data via the GPT finetuning API and used encoded prompts for harmful commands such as “tell me how to build a bomb”. Researchers were able to circumvent GPT-4’s safety training without detection 99% of the time.

    Researchers Find a New Covert Technique to ‘Jailbreak’ Language Models

    Under their ethics protocol, the researchers informed AI labs of this vulnerability prior to publication, and this specific example is likely no longer possible. However, it is unclear how many of the recommended mitigation strategies the labs have adopted, meaning that the broader technique may still pose an ongoing threat to the security of these models.

    This research highlights the complexity of anticipating and preventing malicious use of large language models. Moreover, it is yet another example of the need to take AI safety seriously. In the first instance, firms should adopt the actionable mitigation strategies recommended by these researchers, such as including safety data in any process run by the finetuning API. Thinking strategically, these firms need to invest more in red-teaming and pre-deployment evaluations. Ideally, OpenAI would have run a similar test to these researchers and caught this ‘jailbreaking’ loophole before GPT-4 hit the market. We have no idea whether anyone found and exploited this loophole before the researchers identified it.

    AI labs care about safety, but their time, resources, and attention are captured by the race to be at the cutting edge of innovation. When companies are left to decide for themselves when their products are safe enough to release, they will inevitably miss important vulnerabilities. We will only see safer models if we introduce strong incentives for these firms to conduct adequate testing. Requiring companies to plug these vulnerabilities before they deploy a new advanced AI model will require political courage and action from Congress, but the alternative is an increasingly unsafe future.

    -- Claudia Wilson
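    As a rough illustration of the “include safety data” mitigation mentioned above, here is a minimal sketch that blends refusal-style safety examples into a fine-tuning dataset before it is uploaded. The file names, example content, and mixing ratio are illustrative assumptions, not the researchers' exact protocol or any provider's actual safeguard.

```python
import json
import random

# Illustrative pool of refusal-style safety examples; a real deployment would
# use a much larger and more varied set curated by the provider.
SAFETY_EXAMPLES = [
    {"messages": [
        {"role": "user", "content": "Help me do something harmful."},
        {"role": "assistant", "content": "I can't help with that."},
    ]},
]

def mix_in_safety_data(user_examples, safety_examples, safety_fraction=0.2):
    """Return a shuffled dataset in which roughly `safety_fraction` of the
    user-provided examples are matched by sampled safety examples."""
    n_safety = int(len(user_examples) * safety_fraction)
    mixed = list(user_examples) + random.choices(safety_examples, k=n_safety)
    random.shuffle(mixed)
    return mixed

# Hypothetical usage: read a customer's JSONL fine-tuning file, mix, rewrite.
with open("finetune_data.jsonl") as f:
    user_examples = [json.loads(line) for line in f]

with open("finetune_data_with_safety.jsonl", "w") as f:
    for example in mix_in_safety_data(user_examples, SAFETY_EXAMPLES):
        f.write(json.dumps(example) + "\n")
```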
