Samir Desai’s Post

Strategic Finance, Corporate Development & Investments @HRT | Prev: @Unit, @Chime, @Sift, @JPMorgan | Duke '13

Many worry that AI will lead to dystopia & that our only hope is to slow its development. I disagree. While AI has risks, it democratizes access to computing. We need to enable it responsibly rather than hold it back. Over-engineered post: https://lnkd.in/gjETEV_b 🧵 👇

What's so exciting about AI is that it makes computing accessible to all. English is the hottest new programming language. DALL·E helps the technical get creative & Replit lets the creative get technical. And the biggest AI firms are not FAANG but startups.

Arguments that treat AI as a brand-new phenomenon overlook its long development history, dating back to 1956. Systems like Deep Blue and AlphaGo have existed for years, challenging the perception of AI as a new frontier & underscoring the importance of ethics and governance. Because AI isn't new, fears of job commoditization may be overblown.

Concerns about AI often come from knowledge workers who see their own jobs at risk (for once). We should remind ourselves that we live in a world with more people, more jobs & lower unemployment than at any previous time in history. Temporary displacement is likely, but new opportunities will arise. That said, to prepare workers for AI's impact, we must prioritize retraining and upskilling. Governments and companies need to align on why these metrics matter.

Meanwhile, regulatory efforts around AI are already far ahead of where they were for social media or crypto. AI has its first landmark legislation in the EU, and the US is not far behind. And for the first time, the technology we need to regulate can itself be used to regulate it.

At the same time, tech giants like Microsoft have embraced AI regulation from the start. Red-teaming & AI ethics programs abound. Google & OpenAI label their AIs "experimental," noting their propensity for errors & fabrication, which has produced a more skeptical view of AI-generated information than early Wikipedia ever faced.

Of course, central to the AI debate is the fear of harm. But these fears arise from misunderstanding AI's breadth & overestimating its short-term impacts.
Achieving AGI is a long way off, but some believe it's already too late. We tend to anthropomorphize AI, assuming it will act on human-like motives. While some worry AI might mimic our worst traits, like racism or malfeasance, that is more likely a flaw in our AI training than a step toward a dystopian future.

The real threat of AI isn't machines going rogue; it's people going rogue with AI. Humans pose the greatest danger when wielding AI tech for evil. But the solution is not to block the technology, it's to learn from mistakes. We can regulate AI, monitor outputs, penalize plagiarism, and punish misuse while still supporting AI research.

By far the most pervasive myth that needs dismantling is the binary standoff we imagine between humanity and AI. It's not us vs. them. We can bridge atoms and bits to enhance our capabilities. Ultimately, the future of AI needs regulation & ethical development, but it needs to move forward.

Nikki Varanasi

CEO & Founder | Fintech | Techstars | Ex-McKinsey

Agreed 👏
