We believe in a balance of interests that benefits everyone and allows for a fair and open internet. That’s why we build technologies that create a balance: between people and profit, between content and commercials, between companies, content creators and consumers. Discover more on our website: https://lnkd.in/eEzVYzpa #userexperience #adfiltering #privacy
eyeo’s Post
More Relevant Posts
-
🔒 Likes Are Now Private on X! 🔒 ✅ Privacy First: Only you can see the posts you've Liked. Engage freely without worrying about public judgment or backlash. ✅ More Freedom: Like any content, even the "edgy" stuff, without fear of exposure. ✅ Algorithm Boost: Liking more content will help tailor your feed to your interests. (This change could also hint at new monetization strategies, particularly for adult content creators!) #SocialMedia #Privacy #XUpdate #LikesArePrivate #SMM #SocialMediaMarketing #X #SocialMediaUpdates
-
This week's content marketing term is 'Privacy', more specifically, online privacy. Every day, marketers build relationships with potential clients by posting about themselves, their work and, sometimes, their families. This is great for letting your audience feel a connection with your brand, which they will hopefully buy into. However, how much should a brand or marketer allow followers to know about their lives? As a content creator, are you cautious about sharing personal information? Or are you an open book, happy for anyone to know anything about you? What's your take on privacy online? #privacy #onlineprivacy #contentcreation #onlinecreator #knowinglyselect
-
🚀 Breaking News in Tech! 🚀 Did you hear about the latest ruling that's got the tech world buzzing? Civic and tech groups are claiming victory, saying it's a win for free speech when it comes to social media moderation. 🗣️💻 Here's the scoop: - The Supreme Court is serving up some legal clarity on content moderation. ⚖️✅ - It's not just about posting cat memes anymore – there are some serious legal implications at play. 🐱👤💼 - So what does this mean for the future of social media and free speech online? 🌐🔒 🔮 Prediction Time: - Get ready for some big changes ahead in how social media platforms manage content. 🔄📱 - Will this ruling set a new precedent for the tech industry? 🤔💡 - Stay tuned as we navigate the wild world of online moderation! 🚧🔍 💡 My Take: As an IT pro 🔧 and cybersecurity enthusiast 🛡️, I can't help but wonder how this ruling will impact the way we interact online. It's like the Wild West out there, but with a few more legal boundaries! 🤠👮♂️ Let's keep an eye on how this shakes up the tech landscape in the days ahead. ⏳💥 What do you think about this legal showdown? Drop your thoughts below and let's dive into this together! 💬💭 #ainews #automatorsolutions #TechTrends #FreeSpeech #LegalTech 🌐🔗 #CyberSecurityAINews ----- Original Publish Date: 2024-07-02 16:48
-
Today's Data, Analytics, Protection & Privacy: Privacy in AI. Governments play a pivotal role in shaping policies guiding AI’s responsible use, particularly... Read More: https://lnkd.in/dGHBHmFp #privacy #artificialintelligence #digitaltechnologies #digitalinnovation
https://digitalgovernmentcentral.com
-
ICYMI: From Freestar CEO Kurt Donnell's take on Industry Challenges with OAREX to Google's answers on the Privacy Sandbox, here's what happened in Ad Tech this week. #google #privacy #privacysandbox #icymi #adtech #tech #advertising #industrynews #dsp #ssp #markettrends
ICYMI: Ad Tech Edition | Week of October 9, 2023
https://freestar.com
-
Digital Services Act (DSA) and content moderation … Do platforms need to do better? I think yes. Let me explain:

The DSA’s core content-moderation rule is Art. 16 DSA: platforms must put in place user-friendly mechanisms for reporting allegedly illegal content. If you submit such a "strong" DSA notice, you trigger (strong) obligations under the DSA. In my view, many platforms do not make it user-friendly enough to send such (strong) DSA notices. It seems many platforms make it unnecessarily complicated and deterring to send Art. 16 notices. As a result, reporters might semi-voluntarily give up on such reporting and instead use the weaker and less regulated, but easier to use, reporting mechanisms for flagging Community Standards violations. One could describe this whole setup as a “follow me to unregulated waters!” design.

Last week, I published a blog post analyzing this in more detail: https://lnkd.in/dr2TyvBA There, TikTok served as just one example. If you search, you will likely find similar shortcomings on many platforms.

E.g., here on LinkedIn: try to send an Art. 16 notice alleging illegal content, and you will be puzzled and left with questions. A reporter might select “infringement”, but doing so leaves only the options of trademark or copyright infringement, seemingly with no way to notify other kinds of illegal content.

E.g., X/Twitter: you will be asked to manually type unnecessary details (your name, email address, and the handle of the reported account), which might easily make you give up. All of this seems unnecessarily user-UN-friendly: they have had your name and email since registration, and they know the handle of the reported account because you clicked to report directly at the respective content.

These are just examples. Obviously, there is a lot of food for thought and discussion here for civil society.

Therefore, I am glad that the most ambitious NGOs in this field are already looking into it, especially #HateAid, which has launched broad investigations into the reporting mechanisms of all major platforms and has already submitted a formal complaint: https://lnkd.in/dqQNWAYx

But in the end, it is the regulators who should start investigating. The European Commission has started a proceeding against Meta which might include the topics touched on here (details not public): https://lnkd.in/dBgYwvT2 As a general matter, the Irish Coimisiún na Meán is in charge, but so far there is no sign of action. They should act: insufficient reporting mechanisms affect millions of users in an important field, and the possible shortcomings seem like low-hanging fruit for a regulator!
Follow Me to Unregulated Waters!
https://verfassungsblog.de
-
Make no mistake, this is unknown territory for YouTube and all other platforms... When working with some of the biggest podcasts and top creators on their YouTube rights-management strategies, a growing issue was deepfakes and other AI-generated content that appeared to be brand partnerships. While YouTube has a best-in-class product in Content ID, detecting this type of content is completely different from detecting ordinary piracy, and there is no automated solution, which is something taken advantage of by people wanting to profit off others. Simply put, those abusing it are ahead of the platforms in the same way that platforms have often been ahead of the government. While this is a very positive step by YouTube, the articles about this topic don't convey the scale of abuse happening across all social video platforms. If you're in talent representation, this is an important read, as at some point you'll be reaching out to YouTube about this. Phil Ranta posted about it broadly, but for any managers or agents who have dealt with this issue, feel free to share your experiences below 👇
YouTube will remove videos that include "realistic" genAI recreations of people's faces or voices--if they ask it to - Tubefilter
https://www.tubefilter.com
-
Lifetime Learner|Offering a non-social, tech-only knowledge base to help those in Information Technology stay current and learn about new solutions, risks, and opportunities.
Interesting news in SiliconANGLE & theCUBE "First, the Oversight Board points out that the policy only applies to videos in which people are depicted as saying words they did not say. Videos that depict people doing things they did not do, such as the altered seven-second clip of Biden, are not covered under the policy. Audio files are likewise exempt. The Oversight Board is recommending that Meta extend its manipulated content policy to cover such content." "Under the new rules, organizations will have to add disclosures if their ads depict a real person as saying or doing something they did not say or do" What are the odds that organizations will add the disclosures, and if they do, who will notice? Pia T. Debbie Reynolds Meta #socialmedia #privacy #deepfake #contentsecurity https://lnkd.in/gwEZ9XCJ
Oversight Board calls for changes to Meta’s manipulated-content policy - SiliconANGLE
siliconangle.com
-
#YouTube Introduces New Policy to Remove #AI-Generated Content Mimicking Your Face or Voice. #YouTube has updated its policy to allow individuals to request the takedown of #AI-generated or synthetic content that simulates their face or voice, framing such requests as privacy violations. This change, introduced in June, is part of YouTube’s broader responsible AI agenda, initially rolled out in November. Under the new policy, affected individuals can directly request content removal through YouTube’s privacy request process. However, the platform retains the discretion to evaluate complaints based on various factors, including whether the content is labeled as synthetic, uniquely identifies a person, or qualifies as parody or satire. YouTube will also consider if the AI-generated content features public figures or depicts sensitive behavior, such as criminal activity or political endorsements. The updated policy underscores that simply labeling #AI-generated content does not exempt it from potential removal if it violates YouTube’s Community Guidelines. Additionally, #YouTube will give content creators a 48-hour window to address privacy complaints, either by removing the content or blurring faces, before initiating a review. Although privacy complaints won't result in Community Guidelines strikes, YouTube may act against accounts with repeated privacy violations. This nuanced approach aims to balance the protection of individuals’ privacy with the creative use of AI on the platform. #YouTubePolicy #Provelopers #AIGeneratedContent #PrivacyProtection #SyntheticMedia #ResponsibleAI #ContentModeration #YouTubeUpdates #AIRegulations