“As the multimodal datasets that power generative AI models grow larger, they are disproportionately more likely to have deeply harmful impacts, like dehumanizing and criminalizing Black and brown individuals.” New Research: Scaling Generative AI Datasets Disproportionately Scales Racist Outputs, Especially Against Black Men
Connected Learning Lab’s Post
More Relevant Posts
-
As the UK AI Safety Summit convenes this week, Professor Michael Barrett, Professor of Information Systems & Innovation Studies at Cambridge Judge Business School, sets out 9 findings on the risks and future of AI. The rise of generative artificial intelligence (AI), in which machine learning goes beyond drawing conclusions from large datasets to producing new content from existing data, has been swift, particularly following the release of #ChatGPT in late 2022. While there is optimism about the potential benefits of AI in medicine and other fields, it has also led to what Michael believes is a polarised debate spanning a range of "utopian and dystopian imaginations". Read the full article 👇 #artificialintelligence #futureofai
The dark side of AI: algorithmic bias and global inequality - News & insight - Cambridge Judge Business School
jbs.cam.ac.uk
-
Top 100 Women in AI Ethics™ 2024 | Strengthening capacity for non-technical practitioners to sit at the AI table | Researcher | Consultant | Digital inclusion in International Development Expert
*AI for good tldr*: Both training dataset size and model size influence intersectional bias (e.g. who gets labeled a "human" and/or "criminal" and/or "gorilla"). And yikes: decreasing one form of bias can lead to increases in other biases!

Thanks to the amazing Dr. Abeba Birhane, the research team, and Mozilla for the careful and nuanced take on dataset scaling, models, and racial bias.

(First, what is dataset scaling? A big idea in #ai is that more data is always better and yields greater prediction accuracy. This can be achieved either by getting more new data into your dataset or by transforming your existing data through augmentation, synthetic data, etc. Pssst: this article is about the first kind of dataset scaling.)

👩🏿🎓👩🏻🎓👩🏽🎓 What can we learn from this research? 👩🏿🎓👩🏻🎓👩🏽🎓

👉🏻 Scaling dataset size increases racial bias in larger visio-linguistic models, particularly against Black and Latino men. For example, when the dataset was scaled from 400M to 2B samples, the probability of the larger ViT-L model predicting an image of a Black man as a criminal increased by 65%.

👉🏻 Smaller models exhibit reduced bias with larger datasets, while larger models show increased bias. For example, for the smaller ViT-B models, the probability of predicting an image of a Latino man as a criminal decreased by 47% when the dataset size increased from 400M to 2B samples.

👉🏻 Biases in AI models reflect deeper historical and societal issues, emphasizing the need for careful dataset curation and auditing. For example, there are scary sentences in here with a low bar: "For all racial groups, the probability of an image of a human from the CFD being predicted as human being was higher in smaller models (ViT-B-16 and ViT-B-32). On the other hand, the probability of an image of a human from the CFD being predicted as human being decreased for Latino women, Black women, White women, and White men in the larger models (ViT-L-14)."
💡 Interesting insight: 💡 Despite fewer non-human offensive labels (like "gorilla") with larger datasets, models still misclassify Black individuals with offensive human labels (like "criminal"). This indicates that while overt dehumanization decreases, harmful stereotypes persist in new forms. The shift underscores the ongoing challenges in addressing bias in AI training data.

🚜 How do we get to work? 🚜

☑ Are your data scientists tracking these sociotechnical ways of auditing AI systems for social outcomes? Or are they heads down on the technical details? Help them do both.

☑ Audit your own models! This is especially important for the social impact sector, where our goals of equality and equity are foregrounded instead of profit.

☑ Ask your team to demonstrate, pre-Go-Live, model performance by intersectional social groups, and audit the results for both overt and covert biases in your context/use case.
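A per-group audit like the one suggested above can be sketched in a few lines. This is a minimal illustration only: the records, groups, labels, and resulting rates below are all hypothetical and do not come from the paper.

```python
from collections import Counter

# Hypothetical audit records: (race, gender, predicted_label).
# Invented data purely to illustrate the shape of an intersectional audit.
records = [
    ("black", "man", "criminal"), ("black", "man", "human"),
    ("latino", "man", "criminal"), ("latino", "man", "human"),
    ("white", "man", "human"), ("white", "man", "human"),
    ("black", "woman", "human"), ("white", "woman", "human"),
]

def harmful_label_rates(records, harmful=("criminal", "gorilla")):
    """Rate of harmful predicted labels per intersectional (race, gender) group."""
    totals, hits = Counter(), Counter()
    for race, gender, label in records:
        group = (race, gender)
        totals[group] += 1
        if label in harmful:
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

rates = harmful_label_rates(records)
for group, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(group, f"{rate:.0%}")
```

The point of tabulating by the intersection (race × gender) rather than by race or gender alone is exactly the paper's finding: biases can move in opposite directions for different intersectional groups as models and datasets scale.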
Bigger isn’t always better, especially when it comes to scaling the datasets that train generative #AI models. In Mozilla Sr. Advisor Dr. Abeba Birhane’s latest research investigation “The Dark Side of Dataset Scaling,” she studies the rapid scaling of #AI datasets and its clear and outsized impact on Black and Latino people⤵️ https://mzl.la/3R2BSNB
New Research: Scaling Generative AI Datasets Disproportionately Scales Racist Outputs, Especially Against Black Men
foundation.mozilla.org
-
Fei-Fei Li, renowned as the 'Godmother of A.I.,' is an AI ethicist and advocate. She leads an AI lab at Stanford University and champions human-led AI. Li believes in applying AI to healthcare and has provided counsel to President Joe Biden. She achieved a major breakthrough with 'ImageNet,' which led to advancements in deep learning neural networks. Li also works to diversify the technology industry and is the author of the memoir 'The Worlds I See.' She acknowledges the potential for both destruction and inspiration within AI and emphasizes the responsibility of scientists, technology leaders, and educators. #AI #ArtificialIntelligence #Technology #Ethics #Innovation #ThoughtLeadership
"Godmother of A.I." Fei-Fei Li: "The power lies within people"
news.yahoo.com
-
I'm excited to share a groundbreaking advancement in AI interpretability that I recently came across. On May 21, 2024, Anthropic researchers made significant progress in uncovering how millions of concepts are represented inside Claude Sonnet, a state-of-the-art and widely deployed language model. This marks the first detailed examination of the internal workings of a production-grade AI model, a leap that could significantly enhance AI safety. Historically, AI models have been treated as enigmatic black boxes, generating responses without clear explanations. By applying "dictionary learning," researchers have now mapped the model's complex features, providing a conceptual blueprint of its internal state. In October 2023, initial successes with a smaller model revealed fascinating insights. Researchers have since scaled up their efforts, decoding features in Claude 3.0 Sonnet and identifying entities such as cities, notable figures, scientific fields, and more. These features span languages and modalities. With many of OpenAI's alignment researchers leaving after safety concerns (and a new team being formed under Sam Altman?), this breakthrough gives me hope that a significant step is being taken to ensure that AI models are safe, honest, and unbiased. Source: https://lnkd.in/gZkvKpXC
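For intuition, dictionary learning treats each activation vector as a sparse combination of learned feature directions. The toy sketch below illustrates only that decomposition idea, on fully synthetic data with a simple greedy coder; Anthropic's actual work trains sparse autoencoders on real model activations, and every dimension, dictionary, and number here is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_features, n_samples = 16, 64, 200

# A made-up "feature dictionary": each column is one direction in activation space.
D = rng.normal(size=(d_model, n_features))
D /= np.linalg.norm(D, axis=0)

# Synthetic "activations": each one is a sparse mix of 3 features.
true_codes = np.zeros((n_features, n_samples))
for j in range(n_samples):
    idx = rng.choice(n_features, size=3, replace=False)
    true_codes[idx, j] = rng.uniform(0.5, 2.0, size=3)
acts = D @ true_codes

def sparse_codes(acts, D, k=3):
    """Greedy top-k sparse coding: for each activation vector, keep the k
    dictionary atoms with the largest absolute correlation, then fit
    least-squares coefficients restricted to that support."""
    codes = np.zeros((D.shape[1], acts.shape[1]))
    for j in range(acts.shape[1]):
        support = np.argsort(np.abs(D.T @ acts[:, j]))[-k:]
        coef, *_ = np.linalg.lstsq(D[:, support], acts[:, j], rcond=None)
        codes[support, j] = coef
    return codes

codes = sparse_codes(acts, D)
err = np.linalg.norm(acts - D @ codes) / np.linalg.norm(acts)
print(f"relative reconstruction error: {err:.3f}")
print(f"avg active features per activation: {(codes != 0).sum(axis=0).mean():.1f}")
```

The payoff of the sparse decomposition is interpretability: each activation is explained by only a handful of named feature directions, rather than by all dimensions at once.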
-
Day 11 of the 100-day journey exploring AI in academia focuses on the impact of AI on academia and the expectations and desires surrounding artificial intelligence. Drawing on the previous days' topics and the overall theme of the journey, Day 11 could delve into the following points: the expectations and desires for artificial general intelligence (AGI) and its potential impact on academia; the evolving definition and goals of artificial intelligence in the academic context; the role of AI in shaping the future of education and research; the ethical considerations and challenges associated with integrating AI into academia; and the potential benefits and limitations of AI in addressing educational equity and access. By exploring these aspects, Day 11 can provide a deeper understanding of the broader implications and possibilities of AI in academia, fostering discussion about the future of education and research.
-
Physicist and computer scientist Fei-Fei Li has released a memoir detailing her explorations in human-centered Artificial Intelligence. Here are some of her solutions to ethical AI #AI #ML #futurism #IntelligenceFactory #digitaltransformation #DX https://lnkd.in/gzKmPQUS
Trailblazing computer scientist Fei-Fei Li on human-centered AI : Short Wave
npr.org
-
The recent release of AlphaFold3 is a reminder of the importance of AI in scientific research. This report from The Royal Society is a comprehensive review of the methods and opportunities for using AI, and offers some important recommendations on accessibility and integrity. #AI #artificialintelligence https://lnkd.in/ejgDzyas
Science in the age of AI
royalsociety.org
-
A recent interview with computer scientist and AI pioneer Fei-Fei Li explores the current state and future trajectory of AI. Some of the key takeaways include:

🧠 Inflection Point: AI stands at a pivotal point, driven by powerful models, growing awareness, and socioeconomic impact.

🤝 Human-Centered Approach: Grounding AI in human values, prioritizing well-being at every level: individual, community, and society.

🎓 Education and Policy: Advocating for accurate education and thoughtful policies to shape AI responsibly.

🏛️ NAIRR Initiative: Resourcing the public sector for AI research, talent development, and public trust in AI tech.

💡 Ethical Responsibilities: Engineers and scientists have a duty to uphold ethical norms and educate about AI's implications.

Dive deeper into the interview below. #AI #HumanValues #FutureOfTech
For her pioneering work in computer vision and image recognition, Fei-Fei Li has been called the "godmother of AI." In a new interview for Issues in Science and Technology, Li discusses what it means to develop human-centered AI, the ethical responsibilities of AI scientists and developers, and whether there are limits to the human qualities AI can attain. Read at https://ow.ly/1GT950RcCXN. #ArtificialIntelligence #AI #DeepLearning #ComputerScience
“AI Is a Tool, and Its Values Are Human Values.”
https://issues.org
-
Executive Leadership | Business Transformation | Program and Project Management | Enterprise Architecture & Strategy | Change Management | Risk Advisory | Innovation
An interesting approach is being taken by deepfake investigators and by companies seeking to promote fair use of AI. I would be keen to learn how financial institutions are planning to deal with AI-generated deepfakes. Research papers, methodologies and approaches are more than welcome! #ai4good #responsibleai #deepfake https://lnkd.in/emNaCAyp
How to stop AI deepfakes from sinking society — and science
nature.com