Chatbots spewing hate speech are a problem for engineers and security teams alike, and building safe, reliable AI systems is a challenge that runs deep! Check out my latest blog to learn how we got here and what you can do to deploy AI you can trust. I’ve even drawn up some illustrations to keep you entertained along the way:
Maggie Basta’s Post
-
Generative AI poses a unique set of security problems. Compared to "traditional ML," whose outputs are relatively constrained, genAI's outputs could theoretically be anything. So, what counts as "good," and how do you measure it? How do you deal with the sheer volume of data exposure? And when you put your trust in third-party models, how do you protect the pipeline? Maggie Basta tackles all of these questions and more—plus two market maps and some original drawings—in her newest post on the Scale blog. https://lnkd.in/gBEZV5_N
Building trustworthy AI systems: AI Reliability and Security are everyone’s problem - Scale Venture Partners
https://www.scalevp.com
-
Say hello to Flank 👋

Over the past 18 months, we’ve seen incredible organic growth within organizations. Expert teams outside of legal (think compliance, infosec, finance) are practically pulling our product out of our hands. Now, we’re doubling down.

Flank is an infinitely scalable AI colleague that completely removes blockers for commercial teams. We’ve found that these teams place tremendous value on the information they receive from AI being:
1. To the point
2. Actionable

This is fundamentally at odds with the typical RAG-style answers everyone’s become used to. If you asked a colleague a question and they gave you quotes from everything that seemed “semantically relevant” (typical vector search), you’d probably block them on Slack.

Our experience in serving hundreds of thousands of queries has given us unprecedented insight into what organizations actually need from AI. I’m very grateful to be working with a team that’s using that knowledge to build a product that’s clearly peerless.
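For illustration, here is a minimal sketch of what the post calls "typical vector search": rank every chunk by semantic similarity to the query and hand back the top quotes verbatim, leaving the reader to assemble an answer. The toy bag-of-words embedding and the document set are my own assumptions, not anything from Flank:

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding"; real systems use learned dense vectors.
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Made-up internal "documents" a commercial team might query.
docs = [
    "Our standard NDA term is 2 years.",
    "NDAs must be signed before sharing roadmaps.",
    "Expense reports are due by the 5th.",
]

def semantically_relevant_quotes(query, k=2):
    # Typical vector search: return the k most similar chunks verbatim,
    # rather than one to-the-point, actionable answer.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

print(semantically_relevant_quotes("What is our NDA term?")[0])
# Our standard NDA term is 2 years.
```

Even when the top quote happens to contain the answer, the user still gets a pile of excerpts instead of a direct reply — which is the gap the post is describing.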
-
The ideal case for market regulation is when business interests push tech companies to build higher-integrity AI systems and to acquire data ethically and legally. This is one of the ways it plays out.
Cloudflare is offering to block crawlers that scrape information for AI bots.
theverge.com
-
I'm seeing a lot of developers build internal AI agents directly on top of LLMs. For most use cases, that's a big mistake.

Reason #1 to use Botpress on top of LLMs is our token caching. If you build an internal AI agent directly on top of an LLM, you pay for every single information request. But customers tend to ask repetitive questions. It's dumb to pay for every single one.

For chatbots built on our platform, 27% of all LLM requests are cached. If your chatbot already knows the answer, it answers at no AI cost. Caching has saved our clients billions of tokens over the last couple of months. Anecdotally, some of our clients have seen their caching rate hit 75%.

People love to say they're saving money by building directly on an LLM. This is just one way that's not true.
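The caching idea can be sketched as a lookup keyed on the normalized question, so a repeat query never reaches the model at all. This is a minimal illustration, not Botpress's implementation; `CachedLLM` and `llm_call` are hypothetical names:

```python
import hashlib

class CachedLLM:
    """Answer repeat questions from a cache so they cost zero tokens.

    A minimal sketch, not Botpress's implementation; `llm_call` stands in
    for a real (paid) model call.
    """

    def __init__(self, llm_call):
        self.llm_call = llm_call
        self.cache = {}
        self.hits = 0      # answered at no AI cost
        self.misses = 0    # paid LLM requests

    def _key(self, question):
        # Normalize whitespace and case so trivial variants of the same
        # question share one cache entry.
        norm = " ".join(question.lower().split())
        return hashlib.sha256(norm.encode("utf-8")).hexdigest()

    def ask(self, question):
        key = self._key(question)
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        answer = self.llm_call(question)
        self.cache[key] = answer
        return answer

# Usage with a fake model in place of a real LLM:
bot = CachedLLM(lambda q: f"(model answer to: {q})")
bot.ask("What are your hours?")
bot.ask("what are your hours?")   # cache hit: no tokens spent
print(bot.hits, bot.misses)       # 1 1
```

Production systems would also need cache invalidation when the underlying knowledge changes, and likely semantic (embedding-based) matching rather than exact string normalization — but the cost argument is the same.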
-
Are you currently using GitHub #Copilot in your workflow? If so, we invite you to explore our comprehensive Generative AI Vendor Tool Risk Profiles to see if GitHub has proactively addressed the most critical #risks associated with generative AI, including issues such as #hallucinations and sensitive data leakage. 👉 Swipe through to learn everything you need to know about GitHub's risk assessment, mitigation strategies, certifications, and Credo AI's expert recommendations. 📅 Credo AI's analysis applies to the GitHub Copilot tool version available as of June 25, 2023. 📣 Read the complete profile here: https://lnkd.in/dStYQi3J
-
Long before LLMs were the new hotness, SignalFire made a huge bet that a data-driven platform could power a new approach to venture. It's the reason I joined the team: using data to make smarter decisions has been my not-so-secret weapon as a marketer. After 10 years of building, we're not stopping anytime soon, and the team has been hard at work incorporating LLMs into Beacon AI, our homegrown data platform. Read the blog to learn how we're approaching it. https://lnkd.in/gaM7R28s
VC GPT: How LLMs are strengthening SignalFire’s in-house AI | SignalFire
https://signalfire.com
-
Generative AI is going to require work to be "enterprise ready". Chief among the requirements, as the author highlights, are security, privacy, scalability, and trust. From the article: "The only way RAG — and enterprise AI — work is if you can trust the data. To achieve this, teams need a scalable, automated way to ensure reliability of data, as well as an enterprise-grade way to identify root cause and resolve issues quickly — before they impact the LLMs they service." A semantic graph can jumpstart these initiatives, with the added bonus of being a value-add in a variety of other areas of the business. If your organization is considering how to leverage GenAI, get your data right first. We can help - just send us a message for a quick chat!
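One concrete way to read "a scalable, automated way to ensure reliability of data" is a validation gate that runs before documents ever reach the RAG index. The sketch below is my own illustration under assumed field names and rules (non-empty content, known provenance, freshness), not the author's pipeline:

```python
from datetime import datetime, timedelta

# Illustrative checks only; field names and thresholds are assumptions.
def is_reliable(doc, max_age_days=90):
    """Gate a document before it is admitted to the RAG index."""
    if not doc.get("text", "").strip():
        return False                                  # empty content
    if doc.get("source") not in {"wiki", "crm", "policy_db"}:
        return False                                  # unknown provenance
    age = datetime.now() - doc["updated_at"]
    return age <= timedelta(days=max_age_days)        # reject stale data

docs = [
    {"text": "Refund policy v3 ...", "source": "policy_db",
     "updated_at": datetime.now() - timedelta(days=10)},
    {"text": "", "source": "wiki",                    # empty: filtered out
     "updated_at": datetime.now()},
]

index = [d for d in docs if is_reliable(d)]
print(len(index))  # 1
```

In practice this gate would sit alongside root-cause tooling (lineage, alerting) so that when a check fails, teams can fix the source before bad data reaches the LLM.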
The Moat for Enterprise AI is RAG + Fine Tuning — Here’s Why
towardsdatascience.com
-
GenAI has the potential to revolutionize all industries, including financial services. Despite being technology-led businesses, financial services players have been deliberate in navigating AI. My BCG colleague Neil Pardasani met with Fortune to offer his insights on the field's use of the new technology. He explained that the industry has been taking longer to ensure they “get it right.” Neil highlighted that a balanced approach to innovation and security is vital for financial services players. Neil’s thoughtful contribution to this article displays the strength of BCG insights in this ever-evolving field. Read the full article!
This bank tested 90 uses for AI before choosing the top 2—and they benefit customer service and productivity
bcg.smh.re
-
AI is maturing rapidly, and we are figuring out new ways of using it to enhance the work we do – from recruiting top talent to discovering new drug molecules and more. In financial services, for instance, AI and GenAI tools are already transforming services, offerings, workflows and so much more. But the pace of technological innovation in finance must be matched in kind with the appropriate level of ethical and responsible governance to ensure robust protections are in place for customers and their data. My colleague Neil Pardasani recently sat down with Fortune to discuss why the ‘slow and steady’ route is most sensible when it comes to institutions’ use of these powerful new tools. Why is that, exactly? I encourage you to read the full article to learn more from his discussion: https://lnkd.in/gGeVTDSg
This bank tested 90 uses for AI before choosing the top 2—and they benefit customer service and productivity
fortune.com
-
LLM Ops for Agents? Yes, please. Meet AgentOps.ai Join us as we dive into the “Ops” of agents, tooling built to accelerate how we put complex LLM prototypes into production. We’ll build, evaluate, and monitor some agents live to get a feel for: 🚀 How AgentOps can accelerate LLM application development. ⏪ The utility of session replays and analytics. ⛓️ How AgentOps integrates with LangChain! We’ll also chat live during Q&A with AgentOps Co-Founder & CEO Alex Reibman. RSVP: https://lnkd.in/gBxS7E5Q #LLMOps #OpenSource #AI