Timnit Gebru’s Post

Timnit Gebru

Founder & Executive Director at The Distributed AI Research Institute (DAIR)

One of these days I'm gonna write a blog post about the origin of the term "AI safety" as a sexist and racist way to 1) differentiate what we do as NOT being about "safety" (since environmental impacts, worker exploitation, pollution of the information ecosystem etc. aren't about "safety" apparently, because the moment you start talking about power, who has it and who doesn't, who exploits whom, and name those dynamics, it ceases to be about safety. Interesting 🤔) and 2) again differentiate us little "ethics" people (I've never taken an ethics course in my life, so not sure how we all started getting bucketed there actually) from the whole "AGI" "existential risk" thing, which I've said enough about.

To be clear, now that people have this AI safety vs. AI ethics thing, I don't ever wanna be associated with whatever the "AI safety" people do, but that's because of what they've made the terminology refer to, not because I believe that my work isn't related to safety. Yes. It seems that the many women who have warned about so much stuff weren't doing anything related to "safety." A bunch of white men needed to come along and start working on "safety" to alert all of us.

Remember how OpenAI was described as an "AI safety" company that would save us from Google? And now people are talking about Anthropic as an "AI safety" company that will save us from OpenAI? They are such good grifters, I'll give them that.

And lastly, the people who are actually on the front lines of "AI safety" are the data workers highlighted in our Data Workers Inquiry (https://data-workers.org/).

🎤 Kai Dupe, Ed.D. (he/him)

I help organizations leverage computing to improve productivity.

2w

I think maybe you just did. :-)

Morten Rand-Hendriksen

Making sense of AI, tech, and society

2w

Please write it if you can find the time. It's such an important part of the conversation. Coming from the field of ethics, I am baffled by how the term "safety" is being used in these contexts, and by how unaware these people who have appointed themselves the designers of our future with AI are of the real-world consequences of what they are doing (and their refusal to admit their endeavour is a political one as much as it is a technological one).

Theo Cosmora

Sentient, UN Award Recipient, Inventor, Vegan 🌱

2w

Course on Ethics be like...
Q: Really, I shouldn't lie? A: Er, No!
Q: Really, I shouldn't steal? A: Er, No!
Q: Really, I shouldn't abuse? A: Er, No!
Q: Buh, business is business, right? A: Er,..........!
Most intelligent species in the Universe creating 'artificial intelligence'. Happy Days.

Obsidian A.

Bridging the gap between commercial & creative

2w

This is deeply revealing, thank you. I was applying to AI equity & ethical AI roles at one point & I can now understand why I would have been flatly rejected / ignored 🤣. It's a shame how many organisations pretend as if they want progress, when really what they want are gatekeepers of the status quo. Thanks for making sure they can never hide what's under the table by shaking the life out of it.

Rebecca Hawkins

MATS Scholar | AI governance and safety

2w

Where is the sexism?

Gillian Marcelle, PhD

CEO and Founder, Resilience Capital Ventures LLC

2w

We will not allow revisionist history of AI to take hold, Timnit Gebru.

Kabir Kumar

Co-Founder & Director at AI-Plans.com

2w

I prefer 'AI alignment' to 'AI safety'.

Safety is literally one of the lowest rungs on Maslow's hierarchy, so "AI safety" literally translates to doing the bare minimum.

What about sidestepping the current algorithm-driven "push model", where platforms dictate content visibility, in favour of a user-empowered "pull model" epitomised by tech such as RSS (which has been around since the 1990s)? This would be one of the keystone themes of a "Blueprint for digital secession". 🤔
