Agents vs. RPA: With the advent of new agentic AI architectures, will agents replace RPA? Or are the two fundamentally different?

RPA is good for repetitive, high-volume tasks. It requires deterministic data and process steps, much like what robots do on factory floors, such as picking items in Amazon warehouses. The thinking and process design have already been done: "this is how you process a standard insurance claim." The downside is that because it is deterministic, it usually cannot handle exceptions smartly. How should it tackle an incorrectly filled insurance form? If you add a new field to the insurance form, you have to rebuild the robot.

Agents, on the other hand, are not deterministic. You give an agent a goal; it observes what humans do, learns, and figures out what data and process steps might be needed to achieve that goal. As a result, agents are much better at handling exceptions and adapting as things change.

My take: we are far from AGI. Agents will start with a narrow set of relatively simple tasks tied to specific systems. That's how we are building at Statisfy. Incrementally, as models and agentic systems improve, we layer on complexity across data, process, and decisions.

What do you think?

Credit to David Luan (Adept) for the inspiration behind this post. Listen to the full 20-minute VC podcast with David here: https://lnkd.in/gWsAMiMt
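The deterministic-vs-goal-driven contrast can be sketched in a few lines of toy code. This is purely illustrative (the claim-form fields and the handle_exception stub are hypothetical, not any product's implementation): the RPA path breaks on any unexpected input, while the agent path routes the exception to a reasoning step instead of failing.

```python
# Illustrative sketch: deterministic RPA script vs. goal-driven agent loop.
# Field names and the handle_exception stub are hypothetical stand-ins.

def rpa_process_claim(form: dict) -> str:
    # Deterministic: every expected field must exist in the expected shape.
    # A new or misspelled field breaks the bot until it is rebuilt.
    policy = form["policy_number"]          # KeyError if the form changed
    amount = float(form["claim_amount"])    # ValueError if badly filled
    return f"approved:{policy}:{amount:.2f}"

def agent_process_claim(form: dict) -> str:
    # Goal-driven: try the standard path, but fall back to a reasoning
    # step (here a stub) when the input does not match expectations.
    try:
        return rpa_process_claim(form)
    except (KeyError, ValueError) as exc:
        return handle_exception(form, goal="process insurance claim", error=exc)

def handle_exception(form, goal, error):
    # Stand-in for an LLM call that inspects the form and decides what to do.
    return f"escalate:{goal}:{type(error).__name__}"

print(agent_process_claim({"policy_number": "P-1", "claim_amount": "100"}))
print(agent_process_claim({"policy": "P-2"}))  # unexpected schema, handled
```

The point of the sketch: the agent degrades gracefully on the malformed form instead of crashing, which is the exception-handling advantage described above.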
Munish Gandhi’s Post
More Relevant Posts
Ready to transform your business? Get your custom Generative AI Plan! 📩 Underwriting processes are becoming more efficient, accurate, and customer-centric. One of the most revolutionary developments in this field is the use of no-code GenAI in underwriting 😃 📩 Get Your Plan: https://lnkd.in/dn96_-MX 🚗 Blog Post: https://lnkd.in/daH4ZK7q INTELLITHING is collaborating with the NVIDIA Inception Program. This technology is set to redefine the way insurance companies operate and serve their customers, paving the way for a more data-driven and agile future in underwriting. #generativeai #artificialintelligence #nocode #businessgrowth #prediction #trends #llm #chatgpt #llama #nvidia #development #insurance
Strategic Automation Consultant/Chief Content Evangelist/Speaker/IBM Champion 2020/2021/2022/2023/2024 IBM BA Partner
We may be looking at the radical, even revolutionary change of the information ecosystem as we now know it. #ai #artificalintelligence #leadership #innovation #ChatGPT #futureofwork #GenerativeAI #businessautomation #genai #digitaltransformation #processautomation #ibm IBM #chatbot #startup #marketing #strategy #business #publicsector #technology #metaverse #airegulation #llm #data #ml #machinelearning #customerservice #aigovernance #aitools #aileadership #aiagents OpenAI #promp NVIDIA
NVIDIA GTC 2024: How AI Is Driving Enterprise Transformation
biztechmagazine.com
From an individual perspective, being able to piece the puzzle together when you start your coding journey is really significant. Understanding the fundamental principles of #data streaming and its role in #artificialintelligence (#AI) and #automation can be daunting but is crucial for building effective and reliable #AIsystems.

From a #datapipelines perspective, data streaming plays a pivotal role in ensuring the seamless flow of information from #datasources to AI systems and other processing units. Data pipelines are structured pathways through which data travels, getting collected, processed, and analysed in real-time or batch mode. Data streaming enhances these pipelines by providing a continuous and instantaneous flow of data, which is critical for applications that demand timely and accurate information processing.

#Errorhandling is another critical aspect of #datastreaming. In data pipelines, ensuring the integrity and reliability of data is paramount. #Errors can occur at various stages, from data collection to transmission and processing. Effective error-handling mechanisms are necessary to detect, log, and rectify these errors without disrupting the continuous #dataflow. This includes strategies such as #retrymechanisms, #datavalidationchecks, and #alertsystems that notify #operators of issues in real-time. Robust error handling ensures that the data pipeline remains resilient and maintains high data quality, which is essential for accurate AI and automation outputs.

We talked about this recently when we discussed #turbocodes. Turbo codes are an advanced error-correction technique used to enhance the reliability of #datatransmission in data streaming and data pipelines. They employ iterative #decoding and redundancy to detect and #correcterrors in the transmitted data, significantly improving #dataintegrity and reducing the likelihood of data corruption.
By incorporating turbo codes into data streaming processes, data pipelines can achieve higher reliability and accuracy, which is crucial for applications that rely on precise and real-time data, such as autonomous vehicles and financial trading systems. These are fundamental principles in your journey to comprehending the various steps of building artificial intelligence. Understanding the critical role of data streaming, error handling, and advanced error correction techniques like turbo codes lays the groundwork for developing reliable and efficient AI systems. This knowledge equips you with the tools necessary to ensure that your AI models and automation processes are based on accurate, real-time data, allowing them to make informed decisions and adapt to changing conditions dynamically.
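The error-handling strategies listed above (validation checks, retry mechanisms, alerts) can be made concrete with a deliberately simplified sketch. All names here are illustrative assumptions, not a real streaming framework's API: each record is validated, transient failures are retried with exponential backoff, and records that never succeed trigger an operator alert instead of silently breaking the flow.

```python
import time

# Toy sketch of error handling in a streaming pipeline: validate each record,
# retry transient failures with backoff, and alert on records that never pass.
# All names are illustrative, not a real streaming framework's API.

def validate(record: dict) -> bool:
    # Data-validation check: require a numeric 'value' field.
    return isinstance(record.get("value"), (int, float))

def alert(message: str):
    # Stand-in for a real alerting system notifying operators.
    print("ALERT:", message)

def process_with_retry(record, handler, retries=3, base_delay=0.01):
    if not validate(record):
        alert(f"invalid record dropped: {record}")
        return None
    for attempt in range(retries):
        try:
            return handler(record)
        except RuntimeError:                       # transient processing error
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    alert(f"record failed after {retries} retries: {record}")
    return None

result = process_with_retry({"value": 42}, handler=lambda r: r["value"] * 2)
print(result)  # 84
```

The key design choice is that validation and retries are handled per record, so one bad or flaky record never stalls the continuous flow that the rest of the pipeline depends on.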
Founder // CEO MakeIntellex // Cloud and AI leader // Accelerating adoption, collaboration, and innovation utilizing high level AI and Cloud Architecture
Every Gen AI discussion I've had comes down to these very concepts: cost, value targets, AI cycle integrations like AutoGen, and so on. After a while it becomes necessary to break things down into what's needed, what's expected (so it can be tested), and how to deliver. Different approaches at times, but still a continuation of the concepts behind exceptional delivery.
#AI presents us with many #dilemmas: situations where there is no one clear answer, where we need to weigh different tradeoffs and, ultimately, make a judgment call about what we value and how we want to proceed. One topic that has been top of mind recently is the cost of using #LLMs in an enterprise setting. With NVIDIA posting record revenues (and reaching a $2 trillion valuation, kudos to our LLM Mesh partner!), how can enterprises ensure that they are not wasting money?

First, it's important not to let a fear of wasting money get in the way of innovation. You need to take some risks; there is a huge first-mover advantage. And you need to be willing to accept some failures along the way. What is essential, though, is keeping track of where that spending is going.

As #GenAI gets used more in the enterprise, some of the peculiarities of its #cost structure will become more apparent. Part of this is per-token pricing; IT budgets historically have not been built on token counts, but this will be relatively easy to manage. What may be more challenging is when one prompt begins to trigger 10, 15, or 20 more prompts in a fully robust, enterprise-grade GenAI project, because LLMs are also used to evaluate and control both the prompt and the response returned. AI as author and editor is all a bit weird, but companies need to keep track of all of these costs to manage their GenAI deployments properly.

Since I've been having these discussions internally and externally, I thought I would share them here as well, because I'd love to hear what you think and what your experiences have been.
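The prompt fan-out point lends itself to back-of-the-envelope arithmetic. In this sketch the per-token prices and token counts are illustrative assumptions, not any provider's actual rates; the takeaway is how hidden evaluation and control calls multiply the cost of one user-visible prompt.

```python
# Back-of-the-envelope GenAI cost model. Prices and token counts are
# illustrative assumptions, not any provider's actual rates.

PRICE_PER_1K_INPUT = 0.01    # dollars per 1,000 input tokens (assumed)
PRICE_PER_1K_OUTPUT = 0.03   # dollars per 1,000 output tokens (assumed)

def call_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT + \
           (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# One user-facing prompt...
user_call = call_cost(1000, 500)

# ...plus the hidden evaluation/control prompts it triggers
# (guardrails, response grading, prompt rewriting, and so on).
hidden_calls = 15 * call_cost(800, 200)

total = user_call + hidden_calls
print(f"user-visible: ${user_call:.3f}, total: ${total:.3f}")
```

Under these assumed numbers, the 15 hidden calls make the true cost roughly an order of magnitude higher than the visible one, which is exactly why token-level cost tracking matters for enterprise budgets.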
So fun to speak at Matt Turck's AI fireside chat last night with Cristóbal Valenzuela, Cofounder and CEO of Runway, and Florian Douetteau, Cofounder and CEO of Dataiku! Couldn't agree more with Florian's advice on the benefits of being an early adopter of generative AI. The cost of inaction and stagnation is FAR greater (100x, up to bankruptcy) than the cost of AI experimentation and pilots. I saw it firsthand with retailers who put off going online in the 2000s: many who failed to innovate filed for bankruptcy, wiping out great companies and jobs. Those who survived the holdout then had to spend hundreds of millions playing catch-up in 2010-2015 and lost billions of dollars of market share. The retailers who invested $100k early in going online created so much value for consumers and so many jobs, a no-brainer in hindsight. Raspberry AI is excited to support the fashion pioneers who see generative AI as the next Internet.
🌐🤖 Transform your business with AI. Discover how Enterprise AI can revolutionize industries by leveraging data and bridging the gap between legacy systems and cutting-edge AI ecosystems through Robotic Process Automation. Dive into our latest insights and learn how to build your AI bridge. Download the eBook to learn more: https://buff.ly/3QFm9UC #EnterpriseAI #Innovation #BusinessTransformation #qBotica #AI #Automation
Re-Imagining the Future of Efficiency: The Road to Intelligent Automation and Enterprise AI - qBotica | Intelligent Automation for your Enterprise | Featured UiPath Platinum Partner
https://qbotica.com
We help organizations succeed using AI by building on a foundation of knowledge management, quality data and sound governance.
AI’s “stone age”? (LLMs powering agents that access other tools.) This is certainly the next phase of robotic process automation (RPA) technology; however, use cases will need to be narrow initially because of the inherent uncertainty around LLM behavior.

This article in #wired has one interesting eCommerce return example: searching through email for a receipt, filling in a return authorization, and arranging package pickup. (I would like a reminder that notifies me when the return window closes for my family’s purchases; that alone would save a lot of money.) Many of these functions are already possible with API callouts and integrations but require a good deal of manual “plumbing.” Imagine the power (and risk) of having the LLM-powered agent build those pieces itself. I believe that is the trajectory (agents building agents), but there would have to be strict guardrails.

The Industrial Revolution enabled tools that could be used to build better tools, but humans did all of the creative problem solving and design. AI has been building better AI, and will continue to, across new domains and multiple dimensions that we cannot yet anticipate. Exciting and scary. https://lnkd.in/eu23WK8V
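The "LLM powering an agent that accesses other tools" pattern, guardrails included, can be sketched minimally. Everything here is a hypothetical stand-in: decide() is hard-coded where a real system would call a model, and the tool registry mimics the eCommerce return example with an explicit allow-list as the guardrail.

```python
# Toy agent loop: an LLM-like policy picks tools until the goal is met.
# decide() is a hard-coded stand-in for a model call; TOOLS and the
# allow-list illustrate the 'strict guardrails' idea, not a real product.

TOOLS = {
    "search_email": lambda state: {**state, "receipt": "order #123"},
    "fill_return_form": lambda state: {**state, "rma": "RMA-9"},
    "schedule_pickup": lambda state: {**state, "pickup": "tomorrow"},
}
ALLOWED = {"search_email", "fill_return_form", "schedule_pickup"}  # guardrail

def decide(state):
    # Stand-in for the LLM: choose the next tool based on current state.
    if "receipt" not in state:
        return "search_email"
    if "rma" not in state:
        return "fill_return_form"
    if "pickup" not in state:
        return "schedule_pickup"
    return None  # goal reached

def run_agent(goal: str, max_steps: int = 10) -> dict:
    state = {"goal": goal}
    for _ in range(max_steps):
        tool = decide(state)
        if tool is None:
            return state
        if tool not in ALLOWED:
            raise PermissionError(f"guardrail: blocked tool {tool}")
        state = TOOLS[tool](state)
    raise RuntimeError("agent exceeded step budget")

print(run_agent("return the toaster"))
```

Note the two guardrails that make the uncertainty manageable: the allow-list constrains *which* tools the model can invoke, and the step budget bounds *how long* it can run.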
Crafting top digital products for mobility, automotive, and fintech companies. Ingenuity, heart, grit, and determination are my core values.
The more automated and AI-driven we become, the more I find that clients' service needs call for a human touch. Read the complaint boards of any company with service offerings, from mobile phones to banking to logistics to travel, you name it. Using technology is a must, but remembering that humans are the actual buyers and deserve human care has to remain. We are all happy with chats and emails, and we even still make phone calls, but are we happy talking to robots yet? Will we ever be? My guess is that how well AI actually masters empathy will answer this question, and not as a program, but as a cognizant (scary) entity. Whatever the technology, an app, a webpage, a platform, or a _________, we have to matter as humans to humans! The HUMAN interface has to remain. I can show you how to make tech deliver while enhancing the human factor. Ask me.
Are you running AI/ML applications at the edge? Then you should check out Avassa for Edge AI, because AI applications require a container application and model management solution that provides the (not so) little extra when it comes to lifecycle management. Avassa for Edge AI allows you to efficiently manage edge-specific challenges and perform actions such as: ► Targeted deployments. One application version might require a GPU, another doesn’t. Or the trained model might have a separate lifecycle from the applications that use it. Avassa for Edge AI allows you to schedule applications only to where it makes sense and provides value. ► CI/CD pipeline extension. Where MLOps stops, Avassa for Edge AI begins. Manage the full lifecycle of your containerized applications and trained models with ease. ► Purpose-built data collection. We provide an embedded telemetry bus running at each edge, for simplified yet secure data collection. Combined with #MLOps tooling of your choice, Avassa for Edge AI unleashes the power of #EdgeAI. It’s your gateway to seamless deployment and operation of distributed on-site Edge AI applications and trained models. Learn more here: https://lnkd.in/dqi3-Y52 (Spoiler alert: Double demos included)
Avassa for Edge AI - avassa.io
http://avassa.io
So you have an AI app that you want to run at the edge.... how are you going to get it there? And how are you going to update it? Monitor it?
Founder @ Stealth | Alum: Confluent, Dropbox, Facebook, IIT Bombay
Another point people often miss with agents is that LLMs enable agents to perform knowledge work, such as researching topics and identifying open questions, and much more. This goes beyond traditional RPA, as the work involves deeper semantic understanding rather than just triggering workflows across applications.