Digital Engagement Intern @ Calix | Science & Tech Writer @ USC Information Sciences Institute | MS Digital Social Media @ University of Southern California | Salesforce, WordPress, Google Analytics 4
Coming across a misleading photo, an incendiary tweet, or a comment chain full of bots and trolls is common whenever we're on the Internet. Especially when "doomscrolling," it's easy to overlook misinformation on social media, or to be influenced by it.
To combat this, USC Information Sciences Institute researchers have developed machine learning models to identify coordinated inauthentic behaviors by accounts on X involved in influence campaigns around the world.
Check out my article about this: https://bit.ly/3QNOOa6 #research #socialmedia #machinelearning #AI #misinformation #influencecampaigns #journalism #sciencewriting
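The post doesn't spell out how the ISI models work, but a toy illustration of one commonly used signal may help: coordinated inauthentic accounts often share near-identical content. The sketch below (my own assumption, not the researchers' actual method) compares accounts by the cosine similarity of their hashtag-usage counts and flags suspiciously similar pairs.

```python
# Toy illustration (NOT the ISI researchers' actual models): one common
# signal for coordinated inauthentic behavior is accounts posting
# near-identical content. Compare accounts by cosine similarity of
# their hashtag-usage counts.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two hashtag-count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical accounts and hashtag counts, for demonstration only
accounts = {
    "acct1": Counter({"#election": 5, "#vote": 3}),
    "acct2": Counter({"#election": 4, "#vote": 3}),  # near-duplicate of acct1
    "acct3": Counter({"#cats": 6, "#coffee": 2}),
}

# Flag pairs whose content overlap exceeds a threshold
names = list(accounts)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        sim = cosine(accounts[x], accounts[y])
        if sim > 0.9:
            print(f"possible coordination: {x} <-> {y} ({sim:.2f})")
```

Real systems combine many such signals (posting times, retweet networks, text similarity) in a learned model; this is only the simplest one.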
Fortune's Path kicks off the week with excerpts from a series of AI-focused conversations with our resident AI genius Sharon C.
Me:
How do we spot deep fakes?
Sharon:
It’s hard, but you can start with asking yourself:
Was the source previously reputable, say, with other information it presented? Did multiple sources generally say similar things about the topic? If so, it is more likely to be true.
👊 However, something written in a way that is designed to be emotionally charged, that tries to get you to be very angry or very fearful is likely not… I generally look for emotionally charged content that’s written in a way that is not conducive to intelligent and rational debate.
Me:
Let’s say you’re working with someone who is new to AI's role in deep fakes and misinformation: what do you tell them about deep fakes?
Sharon:
They have to be made aware that these technologies can be used for harm…have them imagine their favorite online stars and influencers. Can they tell whether or not they are images that have been manipulated?
🚦 Watch for certain signs... what seems designed to deliberately rile people up might, in fact, not be real. That should be communicated whether or not they grasp it in that moment.
👀 One online practice I’ve noticed is people making up quotes or misattributing quotes. One might make up a quote - something that fits one's argument - and then say Einstein or Benjamin Franklin said it when they never did. Or one might take something that no one actually knows who said, and attribute it to someone famous. There’s not much harm in that, other than making us lazy thinkers. But it’s not on the same level of harm as doing a deep fake video of a classmate saying something racist.
___________________________________
😴 Here's to not being lazy thinkers.
We'll talk about how the same kind of approach relates to AI algorithms in our next post.
https://lnkd.in/gctyF9rX
'In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the "deepfakification" of social media. He points out the evolution of our social feeds, which began as platforms primarily for sharing updates with friends, and are now inundated with content generated by artificial intelligence.
So 2024 might just end up being the year of the deepfake. Not some fake Joe Biden video or deepfake pornography of Taylor Swift. Definitely problems, definitely going to be a big thing this year. But what I would see is a bigger problem is what might be called the “deepfakification” of the entire internet and definitely of our social feeds.
Cory Doctorow has called this more broadly the “enshittification” of the internet. And I think the way AI is playing out in our social media is a very good example of this. What we saw in our social media feeds has been an evolution. It began with information that our friends shared. It then merged in content that an algorithm thought we might want to see. It then became clickbait and content designed to target our emotions via these same algorithmic systems. But now, when many people open their Facebook or their Instagram or their TikTok feeds, what they're seeing is content that's been created by AI. AI content is flooding Facebook and Instagram.
So what's going on here? Well, in part, these companies are doing what they've always been designed to do, to give us content optimized to keep our attention.
If this content happens to be created by an AI, it might even do that better. It might be designed by the AI to keep our attention. And AI is proving a very useful tool for doing this. But this has had some crazy consequences. It's led to the rise, for example, of AI influencers: rather than real people selling us ideas or products, these are AIs. Companies like Prada and Calvin Klein have hired an AI influencer named Lil Miquela, who has over 2.5 million followers on TikTok. A model agency in Barcelona created an AI model after having trouble dealing with the schedules and demands of prima donna human models. They say they didn't want to deal with people with egos, so they had their AI model do it for them.
And that AI model brings in as much as €10,000 a month for the agency. But I think this gets at a far bigger issue, and that's that it's increasingly difficult to tell if the things we're seeing are real or if they're fake. If you scroll through the comments on the page of an AI influencer like Lil Miquela, it's clear that a good chunk of her followers don't know she's an AI.'
https://lnkd.in/gmT_8yRz
Who remembers #Gato, the single generalist agent from #GoogleDeepMind? The paper is from 2022 - influencers consider this the pre-GenAI era. Is it?
'Inspired by progress in large-scale language modelling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens'
https://lnkd.in/dEDJtayp #google #ai #agents
A recent study by researchers at the University of Bristol explores the potential for generative AI to enable highly effective targeted political campaigns. The study builds on previous research that shows personal attributes can be inferred from online information and that microtargeting tailored messages to individuals can be more persuasive. Generative AI has the potential to make microtargeting scalable by customizing political messaging and validating its appeal to different segments of the audience. The study conducted experiments using real political ads and found that generative AI models could identify persuasive messages for individuals with different personality traits. The authors argue that this ability poses a threat to fair elections, as microtargeted content can be untruthful or manipulated. They suggest the development of a predictive model to alert users when they are viewing microtargeted campaign content. The impact of microtargeted ads on future elections remains to be seen, but the study highlights the need for regulators and policymakers to consider these findings.
#GenerativeAI #Microtargeting #PoliticalCampaigns
Here's how I cracked the code on finding the highest-performing UGC content for the hashtag #AbortionRights on Instagram:
- Signed up to https://siftsocial.ai (DM me for a free 1000 tokens)
- Scraped the top 25 posts from the hashtag
- Asked siftie to find the post with the most combined comments and likes
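The steps above can be sketched in a few lines. Siftie's actual API and the scraped data format are assumptions here; only the ranking logic (most combined comments and likes) is shown.

```python
# Hypothetical sketch of the workflow above. The post data is made up;
# in practice it would come from the scraping step.
posts = [
    {"url": "https://instagram.com/p/a", "likes": 1200, "comments": 310},
    {"url": "https://instagram.com/p/b", "likes": 980,  "comments": 450},
    {"url": "https://instagram.com/p/c", "likes": 2100, "comments": 95},
]

def top_by_engagement(posts):
    """Return the post with the highest combined likes + comments."""
    return max(posts, key=lambda p: p["likes"] + p["comments"])

best = top_by_engagement(posts)
print(best["url"])
```

Siftie wraps this kind of query behind a conversational interface, so you ask in plain English instead of writing the lambda yourself.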
Here's the post: https://lnkd.in/gG6n2-ss
Let me know what y'all think and DM me if you want some free tokens to play around with Siftie, the world's first conversational AI social media analyst.
Discover the power of segmenting your audience with Synthetic Users!
Their solution analyses vast amounts of data to create realistic profiles that reflect the characteristics and behaviours of a target audience – attributes such as age, gender, occupation, interests, attitudes and more.
Watch the Synthetic Users demo as they show how to target specific user groups and gather insights from each segment.
#SyntheticUsers #marketresearch #syntheticrespondents #ai #LLM #personas #generativeai #artificialrespondents #userresearch #feedback
How are you using AI in your social media? Are you generating image or video content purely from AI?
If you are, and you're posting on Meta, take note of their new terms and conditions! You now need to declare you've used an AI-generated image by labelling your post. The T&Cs are easy to locate when you go to post, so it's worth having a read!
Engineer🧰➡️Real-Estate Pro| MultiFamily Syndicator🏘| Wealth Strategist💰| Traveller✈️| Reader📚| Ex-Qualcomm
Misinformation online demands vigilance. Machine learning models offer hope. Bernice Chan