Follow us on our other channels:
- X: https://lnkd.in/eWtf2r3Q
- Bluesky: https://lnkd.in/etqJbiby
- Mastodon: https://lnkd.in/e-Kh5egi

Looking for CDT Europe? Find their channels here:
- X: https://twitter.com/cdteu
- Bluesky: https://lnkd.in/e_TT-gBY
- Mastodon: https://lnkd.in/eJ7KA-9e
Center for Democracy & Technology
Public Policy Offices
Washington, District of Columbia · 18,362 followers
Promoting democratic values by shaping technology policy and architecture, with a focus on the rights of the individual.
About us
The Center for Democracy & Technology is a 501(c)(3) working to promote democratic values by shaping technology policy and architecture, with a focus on the rights of the individual. CDT supports laws, corporate policies, and technological tools that protect privacy and security and enable free speech online. Based in Washington, D.C., and with a presence in Brussels, CDT works inclusively across sectors to find tangible solutions to today's most pressing technology policy challenges. Our team of experts includes lawyers, technologists, academics, and analysts, bringing diverse perspectives to all of our efforts. Learn more about our experts or the issues we cover: cdt.org/
- Website: http://cdt.org
- Industry: Public Policy Offices
- Company size: 11-50 employees
- Headquarters: Washington, District of Columbia
- Type: Nonprofit
- Founded: 1994
- Specialties: Technology, Policy, and Civil liberties
Locations
- Primary: 1401 K St NW, Suite 200, Washington, District of Columbia 20005, US
- Rue d’Arlon 25, B-1050 Brussels (Ixelles), BE
Updates
-
🚨 The government is trying to use secret evidence to shut down TikTok. On Friday, the DOJ filed a brief in TikTok v. Garland asking the court to seal the information that forms the basis of its claim that TikTok threatens national security. https://lnkd.in/e7hQdVH2 The DOJ also informed TikTok that it would use against it internet conversations collected under FISA Section 702 warrantless surveillance.

If there was ever a case for declassification in the public interest, this is it. Decisions about people's right to access and use their preferred speech platforms shouldn't be based on secret evidence.

CDT recently joined an amicus brief led by EFF. The brief makes clear that #NationalSecurity interests do not diminish the protections afforded by the #FirstAmendment, and that courts must apply the same rigorous standards to laws that restrict speech. https://lnkd.in/eFnpAgvs

The public interest in declassification is extremely high. The intelligence community should adhere to its own Principles of Intelligence Transparency. Americans deserve to know what arguments the government is making about their First Amendment rights. https://lnkd.in/eVUzxRmq
-
On Friday, the U.S. AI Safety Institute (AISI) released draft guidance for managing the risk that advanced foundation models (FMs) are deliberately misused to cause harm. We applaud AISI’s commitment to developing actionable guidance for FM developers. Draft guidance like this is an important step toward safer, more responsible AI development.

We are encouraged that AISI’s recommendations cover the entire AI lifecycle, from developing proactive incident response plans, to conducting repeated assessments during development, to monitoring for misuse post-deployment. We also commend AISI’s emphasis on documentation as a strategy to manage risk, since documentation is one of the basic building blocks of sound risk management. https://lnkd.in/eX2fve5g

The guidance focuses on what developers can do, but downstream deployers have an important role to play in managing misuse risks, too. To enable them to do so, AISI should recommend that developers give downstream deployers documentation and guidance on the fragility of FM safeguards.

The draft correctly recognizes the importance of consulting external experts about misuse threats. But for harms like intimate image abuse, developers should also consult social scientists, advocacy groups, and impacted communities to inform risk identification, assessment, and mitigation.

The draft emphasizes red-teaming for assessing models’ potential for misuse. Red-teaming is important, but it is not a panacea. AISI should emphasize that red-teaming is just one component of a comprehensive risk management strategy. To understand AI vulnerabilities, we also need independent auditors and researchers to investigate them. We’re pleased to see the draft guidance recommend that developers implement safe harbor policies to shield this crucial research from legal attacks.

AISI will be accepting public comments on the draft guidance until September 9, 2024.
Best Practices in AI Documentation: The Imperative of Evidence from Practice
https://cdt.org
-
UN delegates recently met in NYC to discuss the #GlobalDigitalCompact, which aims to affirm governments' central role in governing the internet. But the internet is a global public good that needs multi-stakeholder governance.

CDT recently joined others in a letter “highlight[ing] the areas & aspects of greatest concern, incl. human rights & gender, support for the OHCHR, inclusive approaches to internet governance, consistency in terminology, & decentralization of power.”

We strongly encourage Member States to preserve the strong & engaged voices of the #HumanRights community in delivering on issues of core importance in multi-stakeholder internet governance. https://lnkd.in/eiUuP-UT
CDT Joins Civil Society Joint Brief on the UN Global Digital Compact
https://cdt.org
-
MUST READ: A new post by CDT’s Amy Winecoff & Miranda Bogen discusses an essential, yet often underappreciated, tool in AI governance: comprehensive documentation. Documentation enables practitioners to identify & address potential failure modes proactively.

Documentation has benefits beyond transparency; it informs internal decisions about managing AI system risks, fosters ongoing improvements, & ensures responsible development and deployment.

To maximize documentation's potential, we need evidence about what works in practice. This includes insights from public- & private-sector practitioners and empirical research. Usability, audience ambiguity, & misaligned incentives are challenges that must be addressed for documentation to fully support robust AI governance.

🔗 Dive deeper into how comprehensive documentation can transform AI risk management and governance. Discover the importance of empirical evaluation & evidence-based practices in shaping effective AI documentation standards. https://lnkd.in/eX2fve5g
Best Practices in AI Documentation: The Imperative of Evidence from Practice
Center for Democracy & Technology on LinkedIn
-
Welcome to the July edition of Center for Democracy & Technology's newsletter. This month, dive into our latest insights on how legislation can address the risks posed by #AI in employment and learn more about our recent advocacy efforts!
July 2024 Newsletter
Center for Democracy & Technology on LinkedIn
-
Today is the 34th anniversary of the #AmericansWithDisabilitiesAct (ADA). This was a momentous occasion in the history of the #DisabilityRights movement, as the #ADA prohibited discrimination on the basis of #disability.

In honor of the #ADA’s anniversary, check out CDT’s new report by Ariana Aboulafia, Miranda Bogen, & Bonnielin Swenor on how under-representative data on disability can lead to #AlgorithmicBias – and how to combat it. The report, “To Reduce Disability Bias in Technology, Start With Disability Data,” provides recommendations on how to inclusively design algorithmic systems – which starts with ensuring data sets are inclusive of people with disabilities. https://lnkd.in/dM6hNXcc

CDT is proud of our work on #DisabilityRights in #TechPolicy, where we focus on combating tech-facilitated disability discrimination and mitigating the risks of technologies for people with disabilities. Read more here: https://lnkd.in/e7yTJtdh
Report – To Reduce Disability Bias in Technology, Start With Disability Data
cdt.org
-
CDT's Senior Counsel for Competition Policy George Slover was featured in the LA Times on the FTC’s recent investigation into “surveillance pricing,” which uses algorithms & AI to offer personalized prices online based on a customer’s ability or willingness to pay.

“[Slover said] ‘bespoke pricing’ amounts to an extreme reversal of a system that has worked for consumers since the advent of the price tag.”

“Instead of sellers offering goods and services to anonymous buyers, [Slover] said, ‘the seller knows everything about the buyer, & what they are likely, willing and able to pay’ — while keeping the buyer in the dark about what the seller is charging everyone else.”

Slover: “It inverts, or you might say perverts, the assumptions at the very foundation of the justification for the free market.” https://lnkd.in/dRJ862yN
What is 'surveillance pricing,' and is it forcing some consumers to pay more? FTC investigates
latimes.com
-
The Senate Appropriations Committee approved the Fiscal Year 2025 CJS Appropriations Act. CDT thanks Senators Shaheen & Moran for their leadership in ensuring that #antitrust enforcement gets full use of merger filing fee receipts & is not burdened with restrictive riders. https://lnkd.in/divmyxq8
BILL SUMMARY: Commerce, Justice, Science, and Related Agencies Fiscal Year 2025 Appropriations Bill
appropriations.senate.gov