Style is a distributional property; correctness is an instance-level property. LLMs (and GenAI) learn and sample from a distribution (and can thus capture style). Databases store and retrieve instances (and can thus ensure correctness). Think twice before buying claims that LLMs can self-verify correctness (and ensure factuality), or that databases can get creative.
Professor, I have a request for you: make a lecture video to bust the myth about LLMs & AGI.
Very good analogy! I dare to add a bit more flavor to it. Style and correctness are not mutually exclusive. Style is a collection of basic invariants found in a set of possible solutions (instances). More honed, refined styles eventually increase precision. So ultimately, continuously nudging (fine-tuning/training) an LLM to calibrate its styling will lead to better and more precise outcomes :-)
Fascinating insights on the properties of language models and databases! Subbarao Kambhampati
This could be converted into a theorem and propagated, similar to the CAP theorem, for the expected guarantees. Otherwise, good luck convincing "LLM sponsors" who want major breakthroughs in every critical field from drug discovery to financial security!
Well said.
Thanks for your relentless efforts in clearing up the misconceptions with captivating examples, sir!
That's a fascinating insight into the nuances of style and content; you have a deep understanding of the complexities involved. Subbarao Kambhampati
Interesting insights! Always important to critically evaluate new claims in the rapidly evolving field of AI. Subbarao Kambhampati
Very deep insight relating LLMs and data.
Thank you, Professor, for your insights. Throughout our history we figured out the fallibility and limitations of our brain, and hence invented technologies like written and printed record logs/books, which we later digitized as systems of record/database systems. Your talks about LLMs not being so good at higher-level thinking and planning also stuck with me. But I'm curious: what if we look at the brain as a kind of "prediction machine"? I recently watched a lecture by Prof. Andy Clark on this idea of "predicting minds." Do you think LLMs are better than human brains in that aspect, especially when it comes to making predictions? https://www.youtube.com/watch?v=A1Ghrd7NBtk&t=1181s