CAIDP Provides Comments to Australian Government on Online Safety Act and AI 🇦🇺

In comments to the Australian Government, the Center for AI and Digital Policy wrote, "the online risks which accompany the advent of generative AI are extensive, and include threats to personal privacy, intellectual property, and life-altering outcomes based on AI-enabled decision-making." CAIDP thanked the Australian government for the opportunity to provide public comments on proposed changes to the Online Safety Act and made several specific recommendations concerning AI:

1. Establish red lines for developers, providers, and deployers of AI systems regarding training data, prohibiting practices that contravene the Australian Privacy Principles, including web scraping of personal data and intellectual property.
2. Require transparent and contestable data provenance for AI models trained on web-scraped data, so that data subjects can learn when their personal, private data and intellectual property have been used to train AI models, with an opportunity for compensation and removal of their data.
3. Require rigorous, independent impact assessments prior to deployment to identify and mitigate potential online harms, including biases and rights violations, with ongoing re-assessments across the AI lifecycle.
4. Require algorithmic transparency for AI systems, so that users know when they are interacting with an AI or algorithmic system and are given clear and valid reasons for outcomes affecting their lives.
5. Require human oversight and control over AI systems operating online, with an affirmative obligation to terminate a system if human control is no longer possible or if the system fails to uphold human and civil rights, in keeping with the Universal Guidelines for AI, a precursor to the Australia-endorsed UNESCO Recommendation on the Ethics of Artificial Intelligence.
Merve Hickok Marc Rotenberg Caroline Friedman Levy Nayyara Rahman Lyantoniette Chua Center for AI and Digital Policy Europe #australia #onlinesafetyact #aigovernance #webscraping #dataprotection #intellectualproperty #impactassessments
Center for AI and Digital Policy's Post
More Relevant Posts
On May 21, 2024, the Council endorsed the Artificial Intelligence Act (AI Act), marking a significant milestone as the first set of worldwide rules on AI. The AI Act aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI while boosting innovation and establishing Europe as a leader in the field. This regulation applies to both public and private AI systems within the EU and is aimed at all types of AI providers.

Next Steps:
- Publication in the EU's Official Journal.
- The Act will enter into force 20 days after publication and be fully applicable 24 months later, with specific timelines for certain provisions.

Classification and Risk-Based Regimes of AI Systems
AI systems are categorized by risk level:
- Minimal Risk: Common AI systems, like spam filters and recommender systems, with no additional obligations.
- Transparency Risk: General-purpose AI must meet transparency requirements, including compliance with EU copyright law.
- Systemic Risk: Providers must perform model evaluations, mitigate systemic risks, and ensure cybersecurity protection.
- High Risk: Requires conformity assessment and post-market monitoring, including public registration and transparency measures.
- Unacceptable Risk: Prohibited due to threats to citizens' rights.

Exemptions and Support for Innovation
- Fully exempted: AI for research, development, prototyping, and military, defence, or national security purposes.
- Providers of free and open-source models are exempt unless they pose systemic risks.
- Regulatory sandboxes will be established to support SMEs and start-ups.

Procedures and Fines
Each Member State will designate a national authority to supervise the AI Act. A new European AI Office will oversee general-purpose AI models.
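For readers who track compliance programmatically, the risk tiers described above can be sketched as a simple lookup table. This is an illustrative sketch only: the tier names and obligation strings paraphrase this post's summary (not the Act's legal terminology), and `obligations_for` is a hypothetical helper, not part of any official tooling.

```python
# Illustrative mapping of AI Act risk tiers to the obligations summarized
# in the post above. Wording is shorthand, not legal text.
AI_ACT_RISK_TIERS = {
    "minimal": {
        "examples": ["spam filter", "recommender system"],
        "obligations": [],  # no additional obligations
    },
    "transparency": {
        "examples": ["general-purpose AI"],
        "obligations": ["transparency requirements",
                        "EU copyright law compliance"],
    },
    "systemic": {
        "examples": ["large general-purpose models"],
        "obligations": ["model evaluation", "systemic risk mitigation",
                        "cybersecurity protection"],
    },
    "high": {
        "examples": ["AI in critical infrastructure"],  # illustrative example
        "obligations": ["conformity assessment", "post-market monitoring",
                        "public registration", "transparency measures"],
    },
    "unacceptable": {
        "examples": ["social scoring by public authorities"],
        "obligations": ["prohibited"],
    },
}

def obligations_for(tier: str) -> list[str]:
    """Return the obligations attached to a risk tier (KeyError if unknown)."""
    return AI_ACT_RISK_TIERS[tier]["obligations"]
```

A structure like this makes it easy to surface, for a given system classification, which duties apply before deployment.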
Breaches of the AI Act will result in fines based on the severity and type of infringement, with specific thresholds for different categories. Curious to learn more about these cases and their implications? Read the latest blog post written by Kim Lucassen, Nina Orlić, Kirill Ryabtsev, Stéphanie De Smedt, Emilia Fronczak, Gilles Pitschen, Martijn Schoonewille, Yannick Geryszewski, Marc Ph.M. Wiggers, Ph.D., and Marco de Vries for valuable insights on this topic. Read more: https://lawand.tax/3yMJ4ax #artificialintelligence #innovation #europe #AIproviders #AIact #lawandtax
Great that the Council endorsed the Artificial Intelligence Act, but I wonder how relevant the framework will be when enforced in 24 months (assuming no EU country will be late on that), knowing that AI is making giant leaps every month. #aiact #ai #artificialintelligence
The Italian Data Protection Authority is paying close attention to Artificial Intelligence.

Following the well-known temporary ban on processing imposed on #OpenAI by the Authority on 30 March of last year, and based on the outcome of its fact-finding activity, in January 2024 the Garante notified OpenAI of breaches of privacy law based on the evidence collected.

Also, in March 2024 the Garante opened an investigation into OpenAI's "Sora" (the new AI service able to create dynamic, realistic and imaginative scenes from short text instructions), asking OpenAI to provide information on the #algorithm that creates short videos from text instructions.

Finally, this week the Garante wrote to the Italian Parliament and Government, highlighting that it possesses the necessary competence and independence to implement the #AIAct, in line with the objective of ensuring a high level of protection of fundamental rights.

"Given its impact on people's rights, AI should fall within the jurisdiction of Authorities with stringent independence requirements, such as #Privacy Authorities, also due to the close interrelation between artificial intelligence and data protection and to the expertise already acquired with regard to automated decision-making," the Garante wrote in its communication to the Italian Parliament and Government.

In the end, it is evident that, on the one hand, the Garante is stressing (as well as investigating) the synergy between #AI and #DataProtection and, on the other hand, pushing for their enforcement by a single independent Authority. We'll see how it turns out: stay tuned!
Entrepreneur. CEO. Digital Visionary. Cloud Enthusiast. Customer Obsessed. Operationally Excellent. Billion Dollar P&L Leader. Bias for Action. Passionate about Diversity, Inclusion and Equity. Love life.
Yesterday the White House signed a sweeping AI directive. A key excerpt: "Blikshteyn specifically pointed to the steps the executive order takes to address privacy issues, which are among the most pressing, given the vast quantities of data that are needed to train effective AI models. The executive order directs the government to prioritize federal support and research for accelerating the development and use of privacy-preserving techniques that allow models to be trained while still preserving individuals' data privacy. Jodi Daniel, who served as the head of health information technology policy for HHS for a decade, highlighted the order's language on privacy-preserving technologies and pointed out the large quantity of data required in developing AI technology. 'Developing technology that will enable the use of that data while mitigating risk to the privacy of the individual whose data might be used in that development, I think it's really important,' said Daniel, who is now a partner at Crowell & Moring LLP."

360ofme can be part of this solution. As we deploy our Privacy and Consent Management solution, enterprises can have access to consent-based LLMs for use in AI, all while preserving the privacy of the individual. Ours is the kind of solution Jodi Daniel referenced in this excellent piece describing what was signed in the AI directive yesterday. #ethicaldatause #dataprivacylaw #dataprivacy #llm #ai
The White House's executive order on AI is out, with the fact sheet available for viewing and the full 111-page EO coming any minute today. Here are some highlights, with the caveat that an EO like this on its own, much like the White House's Blueprint for an AI Bill of Rights last year, carries more influence than actual impact:

- The National Institute of Standards & Technology (NIST) will be responsible for setting standards for "extensive red-team testing" of new AI systems
- An intense focus on national security, with government reporting requirements when creators test systems, and standards to prevent AI from engineering dangerous biological materials
- The institution of best practices for detecting AI-generated content and authenticating official content (which will affect Google searches & SEO immensely)
- Protect Americans' privacy and strengthen privacy-preserving research & technology, including stronger privacy guidelines for government agencies to account for AI risk
- Above all else, catalyze *safe* AI research across the country, with particular references to using AI to enhance the healthcare and education sectors

This executive order is just the beginning, calling for a host of new standards and guidelines for assessing AI risk, but every process has to start somewhere.

Check out our take on whether Congress will be up to the challenge of passing meaningful AI legislation here: https://lnkd.in/d95QvpgD
And see the EO's fact sheet here: https://lnkd.in/engu8BUv
#AI #thewhitehouse #executiveorder #dataprivacy
Artificial intelligence is vampirizing our personal data. And we probably haven't even realized to what extent AI is everywhere:

- GenAI models that require huge amounts of data to train, personal or not...
- smartwatches,
- voice assistants,
- social networks...

And they greedily suck up our personal data, often without our knowledge. For example, Automattic, the company behind Tumblr and WordPress, is selling user data from blog posts, comments, articles, etc. to OpenAI and Midjourney.

The crux of the problem? There is a total lack of transparency about the various AI players who have access to our data: we might have agreed to set up an account to create a blog, but not for our personal data to be used otherwise.

Spoiler alert: GDPR does not dissolve in AI! Freely given, informed and specific consent is required, probably more than ever. Quite the opposite: the brand-new AI Act is based on Article 16 of the Treaty on the Functioning of the EU, which is the cornerstone of the protection of personal data in Europe.

The AI Act prohibits certain AI practices, such as the use of biometric data where it creates a "significant risk of harm to the health, safety or fundamental rights of natural persons." The sanctions are severe for companies that won't follow suit: up to 7% of annual, global turnover for non-compliance.

But is that enough to ensure that we don't end up being the products in a GenAI world? What do you think of this stance taken by the EU?

Regain your free will online. #DataPrivacy #GDPR #AIAct #EUInitiative #DataProtection #Transparency #DigitalPrivacy #EURegulation #EthicalAI #DataSecurity
Director Consulting Expert | Certified Architect | Technologist | XR/VR Evangelist | Innovation, Data, AI & Information Security Professional
It's finally here. :) The EU's new Artificial Intelligence Act: legislation that marks a significant step towards responsible AI usage. This legislation attempts to balance the dynamism of innovation with the need for security and privacy. By implementing rules that protect fundamental rights and promote transparency, it strengthens trust in AI technologies and ensures responsible usage.

As an expert in architecture, AI, information security, and data management, I see several positive aspects:

- Protection of Fundamental Rights: By limiting the use of AI for biometric identification and surveillance, the law addresses crucial privacy concerns.
- Framing High-Risk AI: Clearly defining and regulating high-risk AI systems is crucial to prevent potential harm.
- Support for Innovation and SMEs: The law encourages the development and testing of new AI technologies, which is vital for continued growth in the AI sector.

However, there are challenges. The risk of overregulation and implementation hurdles require careful monitoring. It's important for the law to remain technologically neutral and flexible enough to adapt to AI's rapid development.

In summary, the Artificial Intelligence Act is a necessary step to ensure that AI technology develops in a way that respects our fundamental values and societal norms in the EU while promoting innovation and growth. Your thoughts and insights on this are very welcome! Please leave a comment! #ai #aiAct #eu
Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI | Nyheter | Europaparlamentet
europarl.europa.eu
🇪🇺 EU AI Act: Implementation Timeline

The EU AI Act was published in the Official Journal of the EU on 12 July 2024. Here are the key dates and requirements for the EU AI Act implementation. Are you prepared?

1 August 2024: Entry into Force
- Act officially becomes law
- 20 days after publication
- Marks the beginning of the transition period

2 February 2025: Prohibitions Effective
Ban on unacceptable-risk AI practices takes effect, including:
- Subliminal manipulation
- Exploitation of vulnerabilities
- Social scoring by public authorities
- Real-time biometric identification in public spaces (with exceptions)

2 August 2025: GPAI Obligations Begin
Obligations for general-purpose AI (GPAI) model providers start. Requirements include:
- Model documentation
- Risk mitigation measures
- Incident reporting
- Compliance with EU copyright law

2 February 2026: High-Risk AI Guidance
European Commission to issue guidance on:
- Practical implementation of high-risk AI requirements
- List of practical examples of high-risk and not-high-risk use cases
- Clarification on application of the AI system definition

2 August 2026: Full Application
- Complete enforcement of the AI Act begins
- Exceptions for certain AI systems in EU law areas of freedom, security, and justice
- All providers, users, and importers must comply with applicable requirements

2 August 2027: Commission Report
- European Commission to report on use of delegated powers
- Assessment of need for amendment of the definition of AI systems
- Evaluation of implementation and effectiveness of codes of practice

Is your organization ready for the EU AI Act? Let's connect and discuss how these changes might impact your business. Share your thoughts in the comments!

Want more? Please follow me for regular updates on #dataprivacy and #AIGovernance from China, Hong Kong, Singapore and more. #EUAIACT #AI #Privacy #DataProtection #PrivacyPros
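The arithmetic behind these milestones (publication plus 20 days for entry into force, then staggered application dates) is easy to sanity-check. A minimal sketch in Python, with the dates hard-coded from the timeline above; `days_until` is a hypothetical helper for tracking how much runway remains:

```python
from datetime import date

# Key EU AI Act dates as listed in the timeline above.
PUBLISHED        = date(2024, 7, 12)   # Official Journal publication
ENTRY_INTO_FORCE = date(2024, 8, 1)    # 20 days after publication
PROHIBITIONS     = date(2025, 2, 2)    # unacceptable-risk bans apply
GPAI_OBLIGATIONS = date(2025, 8, 2)    # general-purpose AI duties start
FULL_APPLICATION = date(2026, 8, 2)    # most remaining provisions apply
COMMISSION_REPORT = date(2027, 8, 2)   # delegated-powers report due

def days_until(milestone: date, today: date) -> int:
    """Days remaining until a milestone (negative once it has passed)."""
    return (milestone - today).days
```

A compliance team could feed `date.today()` into `days_until` for each milestone to see which deadlines are still open.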