Women in AI: Anika Collier Navaroli is working to shift the power imbalance

Image Credits: Anika Collier Navaroli / Bryce Durbin / TechCrunch

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution.

Anika Collier Navaroli is a senior fellow at the Tow Center for Digital Journalism at Columbia University and a Technology Public Voices Fellow with the OpEd Project, held in collaboration with the MacArthur Foundation.

She is known for her research and advocacy work within technology. Previously, she worked as a race and technology practitioner fellow at the Stanford Center on Philanthropy and Civil Society. Before that, she led Trust & Safety teams at Twitch and Twitter. Navaroli is perhaps best known for her congressional testimony about Twitter, where she spoke about the ignored warnings of impending violence on social media that preceded the January 6 Capitol attack.

Briefly, how did you get your start in AI? What attracted you to the field? 

About 20 years ago, I was working as a copy clerk in the newsroom of my hometown paper during the summer when it went digital. Back then, I was an undergrad studying journalism. Social media sites like Facebook were sweeping over my campus, and I became obsessed with trying to understand how laws built on the printing press would evolve with emerging technologies. That curiosity led me through law school, where I migrated to Twitter, studied media law and policy, and I watched the Arab Spring and Occupy Wall Street movements play out. I put it all together and wrote my master’s thesis about how new technology was transforming the way information flowed and how society exercised freedom of expression.

I worked at a couple of law firms after graduation and then found my way to the Data & Society Research Institute, leading the new think tank's research on what was then called "big data," civil rights, and fairness. My work there looked at how early AI systems like facial recognition software, predictive policing tools, and criminal justice risk assessment algorithms were replicating bias and creating unintended consequences that impacted marginalized communities. I then went on to work at Color of Change and lead the first civil rights audit of a tech company, develop the organization's playbook for tech accountability campaigns, and advocate for tech policy changes to governments and regulators. From there, I became a senior policy official inside Trust & Safety teams at Twitter and Twitch.

What work are you most proud of in the AI field?

I am the most proud of my work inside of technology companies using policy to practically shift the balance of power and correct bias within culture and knowledge-producing algorithmic systems. At Twitter, I ran a couple campaigns to verify individuals who shockingly had been previously excluded from the exclusive verification process, including Black women, people of color, and queer folks. This also included leading AI scholars like Safiya Noble, Alondra Nelson, Timnit Gebru, and Meredith Broussard. This was in 2020 when Twitter was still Twitter. Back then, verification meant that your name and content became a part of Twitter’s core algorithm because tweets from verified accounts were injected into recommendations, search results, home timelines, and contributed toward the creation of trends. So working to verify new people with different perspectives on AI fundamentally shifted whose voices were given authority as thought leaders and elevated new ideas into the public conversation during some really critical moments. 

I’m also very proud of the research I conducted at Stanford that came together as Black in Moderation. When I was working inside of tech companies, I also noticed that no one was really writing or talking about the experiences that I was having every day as a Black person working in Trust & Safety. So when I left the industry and went back into academia, I decided to speak with Black tech workers and bring to light their stories. The research ended up being the first of its kind and has spurred so many new and important conversations about the experiences of tech employees with marginalized identities. 

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?  

As a Black queer woman, navigating male-dominated spaces and spaces where I am othered has been a part of my entire life journey. Within tech and AI, I think the most challenging aspect has been what I call in my research “compelled identity labor.” I coined the term to describe frequent situations where employees with marginalized identities are treated as the voices and/or representatives of entire communities who share their identities. 

Because of the high stakes that come with developing new technology like AI, that labor can sometimes feel almost impossible to escape. I had to learn to set very specific boundaries for myself about what issues I was willing to engage with and when. 

What are some of the most pressing issues facing AI as it evolves?

According to investigative reporting, current generative AI models have gobbled up all the data on the internet and will soon run out of available data to devour. So the largest AI companies in the world are turning to synthetic data, or information generated by AI itself, rather than humans, to continue to train their systems. 

The idea took me down a rabbit hole. So, I recently wrote an op-ed arguing that I think this use of synthetic data as training data is one of the most pressing ethical issues facing new AI development. Generative AI systems have already shown that, based on their original training data, their outputs replicate bias and create false information. So the pathway of training new systems with synthetic data would mean constantly feeding biased and inaccurate outputs back into the system as new training data. I described this as potentially devolving into a feedback loop to hell.

Since I wrote the piece, Mark Zuckerberg boasted that Meta's updated Llama 3 chatbot was partially powered by synthetic data and was the "most intelligent" generative AI product on the market.

What are some issues AI users should be aware of?

AI is such an omnipresent part of our present lives, from spellcheck and social media feeds to chatbots and image generators. In many ways, society has become the guinea pig for the experiments of this new, untested technology. But AI users shouldn’t feel powerless.  

I’ve been arguing that technology advocates should come together and organize AI users to call for a People Pause on AI. I think that the Writers Guild of America has shown that with organization, collective action, and patient resolve, people can come together to create meaningful boundaries for the use of AI technologies. I also believe that if we pause now to fix the mistakes of the past and create new ethical guidelines and regulation, AI doesn’t have to become an existential threat to our futures. 

What is the best way to responsibly build AI?

My experience working inside of tech companies showed me how much it matters who is in the room writing policies, presenting arguments, and making decisions. My pathway also showed me that I developed the skills I needed to succeed within the technology industry by starting in journalism school. I'm now back working at Columbia Journalism School, and I am interested in training up the next generation of people who will do the work of technology accountability and responsibly developing AI, both inside of tech companies and as external watchdogs.

I think [journalism] school gives people such unique training in interrogating information, seeking truth, considering multiple viewpoints, creating logical arguments, and distilling facts and reality from opinion and misinformation. I believe that’s a solid foundation for the people who will be responsible for writing the rules for what the next iterations of AI can and cannot do. And I’m looking forward to creating a more paved pathway for those who come next. 

I also believe that in addition to skilled Trust & Safety workers, the AI industry needs external regulation. In the U.S., I argue that this should come in the form of a new agency to regulate American technology companies with the power to establish and enforce baseline safety and privacy standards. I’d also like to continue to work to connect current and future regulators with former tech workers who can help those in power ask the right questions and create new nuanced and practical solutions. 
