📢 CAIDP Authors Letter in The New York Times on AI Training, Slack, and Opt-In (June 28, 2024)

In a letter published today in The New York Times, CAIDP's Marc Rotenberg, Christabel R. and Kara Solange Kelawan wrote:

"Re "A.I. Devices Want More of Our Data," by Brian X. Chen (Tech Fix column, June 26):

"Mr. Chen makes a good point about the privacy and security risks of companies collecting customer data to train artificial intelligence models. But the companies are not waiting for user permission to repurpose this data. They assume they can do this unless customers object.

"In a recent statement to Slack, we opted out of its plan to use our data and the data of our friends and colleagues who collaborate with us for training A.I. models. We cannot assume that they would want their content, created for our common projects, to be used by the company for other purposes.

"We have also urged Slack to withdraw the proposed change in its business practices and adopt an opt-in model for the use of customer data for A.I. training.

"If some users wish to provide their data to Slack for A.I. training, that should be their choice. But it is simply unfair and deceptive for Slack to take user data without explicit permission, particularly after it announced, 'You own and control the content within your Slack workspace.'

"We have also notified the Federal Trade Commission of our concerns."

[In 2023, the Center for AI and Digital Policy filed a detailed complaint with the Federal Trade Commission, explaining that OpenAI had violated US consumer protection law when it released ChatGPT knowing of the various risks to public safety, democratic institutions, privacy, intellectual property, and cybersecurity.]

#aigovernance #OurData https://lnkd.in/ecx-tCg2
More Relevant Posts
-
Microsoft has unveiled Bing Chat Enterprise, a new AI-powered chat tool claiming to offer better data protection for businesses. Is it the privacy solution it promises to be? Dive into the details in our latest article. #microsoft #AI #bingchatenterprise
-
Google's AI Tools Prioritize User Privacy to Enhance Security #PrivacyMatters Follow us on Discord: https://lnkd.in/gt823Zd3

Summary: Google has responded to concerns about user privacy and data usage around its generative AI tools in Workspace. Yulie Kwon Kim, VP of product management for Workspace Platform, assures that generative AI does not compromise privacy and that Google does not sell Workspace user data. Users have control over their data, and it is not used without permission. Google also states that Workspace data is not used to train AI outside of Workspace. Google's Duet AI tool stores content alongside Workspace content but does not share it externally, providing enterprise-grade security. Overall, Google prioritizes privacy in AI tools within Workspace.

Hashtags: #chatGPT #PrivacyFirstAI #SecureDigitalAssistants
Google's AI Tools Prioritize User Privacy to Enhance Security #PrivacyMatters
webappia.com
-
Google's AI email experiment raises privacy concerns - Inc. #privacyconcerns 🤝 Follow us on Discord 🔜: https://lnkd.in/gt823Zd3 🤝 Follow us on WhatsApp 🔜 https://wapia.in/wabeta

Summary: Google's AI email experiment, called Smart Compose, aims to make email composition faster and more efficient by predicting what users will type next. While the feature may seem convenient, it raises concerns about privacy and data security: users may be uncomfortable with Google's AI reading and analyzing their emails in order to make suggestions. This raises questions about the potential invasion of privacy and how Google will handle sensitive information. Despite the benefits of Smart Compose, users must weigh the trade-offs in terms of privacy.

Hashtags: #chatGPT #GoogleAIprivacyconcerns #emailexperimentsecurity
Google's AI email experiment raises privacy concerns – Inc. #privacyconcerns
https://webappia.com
-
Cultural Scientist | PR and Communications - love dealing with #innovation #sciencecommunication #sustainability #socialimpact #innovationmanagement #innovationresearch
🤖 »Worst case, you'll have to find another job.« Data privacy is always a tricky subject, since it's difficult to explain to the average consumer why protecting your data (to a certain extent) is worth it. With big companies like Zoom apparently reviving or expanding their data collection efforts, whether to sell data to AI model trainers or to train their own AI models, things become even trickier. Do the benefits outweigh the risks? Obviously, nobody should trust big, profit-driven private companies to do "the right thing" and make ethically sound decisions. But once they are omnipresent (talk about a non-competitive market), you might no longer be able to choose to avoid them. Hans-Böckler-Stiftung https://lnkd.in/eSsrghFj #dataprivacy #aimodels #artificialintelligence #dataanalytics
Zoom Is Using You to Train AI. So Will Everyone Else
https://www.rollingstone.com
-
With Slack announcing that it is training AI on your messages, files, and DMs, and requiring you to opt out, consider this your friendly reminder/warning: between AI training and third-party adtech on your workplace project-management utilities, whatever they might be, both of which are scanning all the client-confidential matters and NDA'd discussions you assume are completely private and internal, there may be a lot of lawsuits against you waiting to happen. Check your account privacy settings, read the fine print of the privacy policies and legal terms, and consider an alternative, which, at this rate, might be a very nice Moleskine. https://lnkd.in/ekAXp7WY
Privacy principles: search, learning and artificial intelligence | Legal
slack.com
-
Keepabl's SaaS is Privacy-in-a-box for busy professionals operationalising governance at their organisation, see how at keepabl.com
Great post by 🐝 Heather Burns on Slack's use of Customer Data in AI training. Such use is becoming widespread. Trust is key and has many factors:
- Do you understand what your supplier is actually doing?
- Do you understand the impact on personal, confidential and other data, e.g. the impact on your NDA obligations on disclosure and the likelihood of leakage?
- What is your potential liability under confidentiality agreements, privacy laws, customer contracts and elsewhere?
All of these interact with your organisation's risk appetite.
-
Zoom's latest terms of service add a new clause about users' personal information. Although Zoom has emphasized that users can choose not to share data with Zoom while using the application, many users have criticized this provision as evidence that Zoom will use their private data to train its #AI system. It also raises concerns about users' inability to assure themselves that their personal #data is not being misappropriated. #privacy https://lnkd.in/erRg3i9x
Concerns about Zoom’s New Terms of Service
https://intelliwings.com
-
#ai | #artificialintelligence | #privacy : Slack may have been using users' chats to train its AI models. Slack faces scrutiny for using customer data without permission, sparking outrage after Corey Quinn highlighted the issue. Users are frustrated by the lack of transparency and the cumbersome opt-out process. Inconsistencies between Slack's privacy policies and its premium generative AI tools raise concerns about user control. The current opt-out approach may require policy changes to ensure greater transparency and user control over data. Read more at: https://lnkd.in/gNTGDVAG
Slack may have been using users' chats to train its AI models - ET CISO
ciso.economictimes.indiatimes.com
-
🚫👁️🗨️ Zoom AI Training: Addressing Privacy Concerns and Providing Mitigations 🕵️♂️📊

Zoom's use of user data for AI model training has raised concerns over privacy and data usage. To ensure transparency and protect user information, consider these suggestions and mitigations:

1. ⚙️ Transparent Communication: Zoom should communicate clearly and openly with users about its AI training practices, providing insight into data usage and explaining the benefits.
2. 🔒 Opt-Out Option: Offer users the ability to opt out of AI model training, respecting their privacy preferences and giving them control over their data.
3. 🔄 Data Anonymization: Implement robust data anonymization techniques to protect user identities and ensure that sensitive information remains secure during AI training.
4. 👥 User Consent: Obtain explicit consent from users before utilizing their data for AI training purposes, ensuring compliance with privacy regulations.
5. 🛡️ Data Protection Measures: Strengthen data protection to safeguard user information, encrypting data both in transit and at rest.

By implementing these suggestions and mitigations, Zoom can bolster user trust, maintain data privacy, and ensure responsible AI model training practices. 📲🔒

#ZoomAI #PrivacyConcerns #UserData #AIModelTraining #DataUsage #DataPrivacy #OptOutOption #Transparency #UserConsent #DataProtection #CyberSecurity #UserPrivacyRights #AIIntegration #DataSecurity #OnlinePrivacy #DataAnonymization #DataProtectionMeasures #ProtectUserPrivacy #CyberAwareness #ResponsibleAI #DataSafeguards #UserTrust #PrivacyMitigations #SecureDataUsage #RespectPrivacyPreferences #DataEncryption #UserDataPrivacy
Zoom trains its AI model with some user data, without giving them an opt-out option
https://securityaffairs.com
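To make the anonymization idea in the list above concrete, here is a minimal Python sketch of how a chat record might be scrubbed before it enters a training corpus: user IDs are replaced with salted, non-reversible pseudonyms and obvious contact details are masked. The salt, field names, and regex here are illustrative assumptions for this example, not Zoom's actual pipeline.

```python
import hashlib
import re

# Hypothetical per-deployment salt; kept secret and out of the training corpus
# so pseudonyms cannot be reversed by hashing candidate IDs.
SALT = b"per-deployment-secret"

# Simple email pattern used here as a stand-in for fuller PII detection.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymize_user(user_id: str) -> str:
    """Map a user ID to a stable, non-reversible token."""
    digest = hashlib.sha256(SALT + user_id.encode()).hexdigest()
    return f"user_{digest[:12]}"

def scrub_message(record: dict) -> dict:
    """Return a copy of a {user, text} record with the ID pseudonymized
    and email addresses in the text masked."""
    return {
        "user": pseudonymize_user(record["user"]),
        "text": EMAIL_RE.sub("[email]", record["text"]),
    }

if __name__ == "__main__":
    raw = {"user": "alice@example.com",
           "text": "Ping bob@corp.io about the NDA."}
    print(scrub_message(raw))
```

The salted hash keeps a given user's messages linkable to each other (useful for training on conversations) without exposing who the user is; a production system would pair this with broader PII detection than a single email regex.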
-
Zoom has faced criticism for changing its terms and conditions regarding training AI models. Even after clarifying the terms and obtaining additional consent from users, Zoom still uses their data to train its models. I wonder how this affects the privacy rights of users and what the implications are for data protection laws. I would love to hear from experts in #privacy on this issue. I guess my bottom line is that in the era of AI, #data is not the new oil, because it is much more valuable! #privacy #ai #zoom #dataprotection
How Zoom’s terms of service and practices apply to AI features | Zoom Blog
blog.zoom.us