Daniel Tunkelang’s Post

High-Class Consultant

I'm usually skeptical about AI safety folks crying existential risk, but I find this document, with credible authors inside and outside the AI community, refreshingly measured.

AI firms must be held responsible for harm they cause, ‘godfathers’ of technology say

theguardian.com

This is the actual policy document: https://managing-ai-risks.com/


We have PLENTY of laws regulating human behavior. Why not just apply those laws to the owner of an AI system? Why do we need laws just for AI?

André Maurice J Bost

Principal Java Developer/Consultant at STAR Java/Scala Engineering LLC

9mo

No risk, no reward. According to IBM's CEO survey, 72% of company executives said they will forgo generative AI opportunities due to emerging legislation. For those who are pessimistic or doubtful about Gen AI, the U.S. is probably the safest "corporate torts" jurisdiction on the topic of trustworthy AI litigation and/or abuses. Quote: "Executives understand what’s at stake: 58% believe that major ethical risks abound with the adoption of generative AI, which would be very difficult to manage without new, or at least more mature, governance structures. Yet, many are struggling to turn principles into practice. While 79% of executives say AI ethics is important to their enterprise-wide AI approach, less than 25% have operationalized common principles of AI ethics." End quote. Ironically, smaller, more agile startups are more likely to comply with emerging law than the larger behemoths, who have already earned consumer loyalty and have the cash reserves to weather litigation and the brand identity/market share to weather reputational risk.

