I'm usually skeptical when AI safety folks cry existential risk, but I find this document, with credible authors inside and outside the AI community, refreshingly measured.
We have PLENTY of laws regulating human behavior. Why not just apply those laws to the owner of an AI system? Why do we need laws just for AI?
No risk, no reward. According to IBM's CEO survey, 72% of company executives said they will forgo generative AI opportunities because of emerging legislation. For those who are pessimistic or doubtful about Gen AI, the U.S. is probably the safest "corporate torts" jurisdiction when it comes to trustworthy-AI litigation and/or abuses. Quoting the survey: "Executives understand what’s at stake: 58% believe that major ethical risks abound with the adoption of generative AI, which would be very difficult to manage without new, or at least more mature, governance structures. Yet, many are struggling to turn principles into practice. While 79% of executives say AI ethics is important to their enterprise-wide AI approach, less than 25% have operationalized common principles of AI ethics." Ironically, the smaller, more agile startups are more likely to comply with emerging law than the larger behemoths, which have already earned consumer loyalty and have the cash reserves to weather litigation and the brand identity and market share to weather reputational risk.
This is the actual policy document: https://managing-ai-risks.com/