We are committed to ensuring the safe use of our leading audio AI technology.

AI audio helps overcome language and communication barriers, paving the way for a more connected, creative, and productive world. It can also attract bad actors. Our mission is to build and deploy the best audio AI products while continuously improving safeguards to prevent their misuse.

“AI safety is inseparable from innovation at ElevenLabs. Ensuring our systems are developed, deployed, and used safely remains at the core of our strategy.”

Mati Staniszewski

Co-founder at ElevenLabs

Our mission is to make content accessible in any language and in any voice.

We are a trusted AI audio provider for millions of users around the world, as well as for leading publishing and media companies including The Washington Post.

ElevenLabs safety in practice

We are guided by three principles to manage risk while ensuring AI audio benefits people worldwide: moderation, accountability, and provenance.

Moderation

We actively monitor content generated with our technology.

Automated moderation. Our automated systems scan generated content for violations of our policies, blocking it outright or flagging it for human review.

Human moderation. A growing team of moderators reviews flagged content and helps us ensure that our policies are applied consistently.

No-go voices. Our policies already prohibit impersonation; in addition, we use a dedicated safety tool to detect and block the creation of content using voices deemed especially high-risk.

voiceCAPTCHA. We developed proprietary voice verification technology that minimizes unauthorized use of voice cloning tools by ensuring that users of our high-fidelity voice cloning tool can clone only their own voice.
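A common way to implement this kind of "can only clone your own voice" check is speaker verification: compare an embedding of the enrolled voice against an embedding of a live audio sample and accept only if they are similar enough. The sketch below is a minimal, hypothetical illustration using cosine similarity on toy vectors; it is not ElevenLabs' actual method, and real systems derive embeddings from audio with a trained speaker-encoder model.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_same_speaker(enrolled_embedding, sample_embedding, threshold=0.75):
    """Accept a cloning request only if the live sample matches the enrolled voice.

    The threshold is illustrative; production systems tune it against
    false-accept and false-reject rates.
    """
    return cosine_similarity(enrolled_embedding, sample_embedding) >= threshold

# Toy embeddings standing in for speaker-encoder output.
enrolled = [0.9, 0.1, 0.3]
same_voice = [0.85, 0.15, 0.28]
different_voice = [0.1, 0.9, -0.4]

print(is_same_speaker(enrolled, same_voice))       # True for these toy vectors
print(is_same_speaker(enrolled, different_voice))  # False for these toy vectors
```

The design choice worth noting is that verification compares a fresh, live sample against the enrollment audio, so a recording of someone else's voice fails the similarity check.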

Accountability

We believe misuse must have consequences.

Traceability. When a bad actor misuses our tools, we want to know who they are. Our systems let us trace generated content back to the originating account, and our voice cloning tools are available only to users who have verified their accounts with billing details.

Bans. We want bad actors to know that they have no place on our platform. We permanently ban users who violate our policies.

Partnering with law enforcement. We cooperate with the authorities and, in appropriate cases, report or disclose information about illegal content or activity.

Provenance

We believe that you should know if audio is AI-generated.

AI Speech Classifier. We developed a detection tool that lets anyone check whether an audio file could have been generated with our technology. On unmodified samples, it maintains 99% precision and 80% recall.
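To make those figures concrete: precision is the share of positive calls that are correct, and recall is the share of actual AI-generated samples the classifier catches. The snippet below computes both from hypothetical confusion-matrix counts chosen to reproduce the reported numbers; the counts themselves are illustrative, not ElevenLabs' evaluation data.

```python
def precision(tp, fp):
    """Of everything flagged as AI-generated, how much really was?"""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of everything that really was AI-generated, how much was flagged?"""
    return tp / (tp + fn)

# Hypothetical counts: 990 AI-generated samples, of which 792 are caught
# (80% recall), with only 8 false positives (99% precision).
tp, fp, fn = 792, 8, 198

print(round(precision(tp, fp), 2))  # 0.99
print(round(recall(tp, fn), 2))     # 0.8
```

High precision with lower recall is a deliberate trade-off for this kind of tool: a positive result is very trustworthy, at the cost of missing some modified or borderline samples.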

AI Detection Standards. We believe that downstream AI detection measures, such as metadata, watermarking, and fingerprinting solutions, are essential. We support the widespread adoption of industry standards for provenance through C2PA.

Collaboration. We invite fellow AI companies, academia, and policymakers to work together on developing industry-wide methods for AI content detection. We are part of the Content Authenticity Initiative, and partner with content distributors and civil society to establish AI content transparency. We also support governmental efforts on AI safety, and are a member of the U.S. National Institute of Standards and Technology’s (NIST) AI Safety Institute Consortium.

“The volume of AI-generated content will keep growing. We want to provide the needed transparency, helping verify the origins of digital content.”

Piotr Dąbkowski

Co-founder at ElevenLabs

Special focus: elections in 2024

Half of the world will vote in 2024. To prepare for this year’s elections, we are focused on advancing the safe and fair use of AI voices.

To facilitate these efforts, we are an inaugural signatory to the Tech Accord on Election Safety, which brings together industry leaders, including Amazon, Google, Meta, Microsoft, OpenAI, and ElevenLabs, in a concerted effort to safeguard global elections from AI misuse.

“As AI becomes part of our daily lives, we are committed to building trustworthy products and collaborating with partners on developing safeguards against their misuse.”

Aleksandra Pedraszewska

AI Safety at ElevenLabs

Read more about our safety efforts
