Sign in to view Erik’s full profile
San Francisco Bay Area
Contact Info
584 followers
500+ connections
Other similar profiles
- Ray He
  Menlo Park, CA
- James Yopp
  San Francisco Bay Area
- Ronan Le Boïté
  Greater Boston
- Anwar Ghuloum
  San Francisco Bay Area
- Joe Gonzalez
  Software Engineer at Facebook
  Redmond, WA
- Nate Voorhies
  Software Engineer at Google
  San Francisco Bay Area
- Svet Ganov
  Mountain View, CA
- Yong Wang
  Engineer Manager at Google
  San Francisco Bay Area
- Brandon Salazar
  Seattle, WA
- Sergey Nikolaienkov
  Zurich
- Clark Scheff
  Chief Technical Monster, Maniacal Monster Studios
  Seattle, WA
- Chen Wu
  Jr. Developer
  Irvine, CA
- Yohan Kim
  United Technologies
  Atlanta, GA
- Lori Hokeness
  San Francisco Bay Area
- Nancy Zheng
  Mountain View, CA
- Krishna Sharma
  Munich
- ☕️ Samuel Griff
  New York, NY
- Xuetao Yin
  Greater Seattle Area
- Zhou Yu
  Santa Clara, CA
- Carlos Augusto Mendoza Sanchez
  Redmond, WA
Explore more posts
- Anshul Bhide
A couple of recent college graduates from India up against AI fine-tuning giants like Anyscale and Run:ai? I met Arko C at a NASSCOM event last October. He was building Xylem AI (YC S24), a platform that lets you train and deploy LLMs in production. So naturally most Indian VCs rejected him, because how could a team of three recent college grads compete with the likes of SV startups such as Anyscale, which have raised hundreds of millions of dollars, have teams of experienced AI researchers, and hold a stash of H100 GPUs? Arko hustled and closed two Fortune 500 companies for paid contracts; I personally know unicorns that haven't managed to do this. He then got into YC in the latest round in May. There's still a lot to be proven out, but Arko exemplifies how valuable grit is in building startups. I'm doing a webinar with him tomorrow on the challenges of using LLMs in production. Link to register in the comments. Disclaimer: I'm a small investor in Xylem.
- Jess Nall
From where I sit in the SF Bay Area, cradle of innovation for at least the last 30 years, I'm a true believer in the promise of the #AI revolution. That said, I find it fascinating to consider views on the opposing side. I'm sharing below two opinion pieces from the last two weeks suggesting that the AI revolution is fundamentally overhyped and a bubble ripe for bursting, and that AI tools are actually not useful or transformative. Contrast this with venture capitalist Kai-Fu Lee's prognostication that 50% of white-collar jobs will be displaced by #AI within three years, and that OpenAI and the other cutting-edge movers in the space are headed to a trillion+ dollar valuation. Which side will win out? Where will we be by 2030 in the AI race? I'd love to hear your thoughts in the comments.
- Sravan Bodapati
AI at Meta has released the most powerful open-source model yet as of today: Llama 3, in 8B and 70B variants with 8k context length! They highlight improvements in each of the following areas as the key differentiators: (a) model architecture, (b) pretraining data, (c) scaling up pretraining, and (d) instruction fine-tuning.
💥 Pretraining and scaling up:
✈️ Vocabulary of 128k tokens and GQA across both 8B and 70B; 8k native context length.
✈️ Pretrained on over 15T tokens, with 4 times more code than Llama 2.
✈️ Multilingual: over 5% of the pretraining data is high-quality non-English data covering over 30 languages (don't expect the same performance as in English).
✈️ Llama 2 generates training data (as in Self-Instruct) used to build the text classifiers that filter the data powering Llama 3.
✈️ Both 8B and 70B keep improving even when trained on two orders of magnitude more data, i.e., log-linear improvement out to 15T tokens of training.
✈️ Training ran on two custom-built 24k-GPU clusters with an overall training-time efficiency of 95%; overall, 3 times more efficient than Llama 2.
💥 Instruction fine-tuning:
✈️ A combination of SFT, rejection sampling, PPO, and DPO.
✈️ The quality of the prompts used in SFT and of the preference rankings used in DPO and PPO has an outsized impact on model performance.
✈️ Careful data curation and multiple rounds of QA on annotations from human annotators.
✈️ Training on preference rankings greatly improved performance on coding and reasoning tasks.
💥 Llama 3 guardrails:
✈️ Updated and new safety tools: Llama Guard 2 and CyberSecEval 2.
✈️ CodeShield: an inference-time guardrail for filtering insecure code.
✈️ The instruction fine-tuned model was red-teamed for safety by generating adversarial prompts that try to elicit problematic responses.
✈️ Llama Guard is foundational for prompt and response safety and can be easily fine-tuned to create a new taxonomy.
💥 Inference and future:
✈️ Despite Llama 3 8B having 1B more parameters than Llama 2 7B, the improved tokenizer efficiency and GQA keep inference efficiency on par with Llama 2 7B.
✈️ 400B+ parameter, multilingual, multimodal, and longer-context-window Llama 3 models will be available on AWS soon. Try it out!
#generativeAI #llm #llama3 #aws #bedrock #sota
- Rebecca Nagel
So this new GPT-4o model (that's how it's written, but she's saying "4-oh") update is really about ease of use, BUT it has better input for voice, visuals, etc., and lower latency. And now free users aren't stuck with 3.5. FREE USERS now have access to GPTs and the GPT Store, greatly widening the market for GPTs. PAID USERS get a 5x capacity limit. GPT-4o is now available in the API; they say it's 2x faster and 50 percent cheaper than GPT-4 Turbo.
- Todd Nist
Meta Introduces Chameleon: A State-of-the-Art Multimodal Model 🦎
Meta has unveiled Chameleon, a groundbreaking multimodal model that pushes the boundaries of AI capabilities. Chameleon seamlessly integrates vision and language, enabling it to understand and generate images and text with remarkable accuracy. Key takeaways:
🎨 Chameleon excels in both image captioning and text-to-image generation, showcasing its versatility across multiple tasks.
🧠 The model leverages a novel architecture called Mixture-of-Attention (MoA), which efficiently processes and combines visual and textual information.
🌐 Chameleon outperforms existing state-of-the-art models in zero-shot image captioning and text-to-image generation on various benchmarks.
🔍 The model's ability to handle complex, compositional prompts sets it apart from other multimodal models.
🔒 Meta has open-sourced Chameleon's code and trained weights, fostering transparency and enabling further research in the field.
Chameleon represents a significant milestone in the development of multimodal AI systems. Its ability to understand and generate both images and text opens up exciting possibilities for applications in fields such as creative content generation, assistive technologies, and more. Read the full article to learn more about Chameleon and its implications for the future of AI: https://lnkd.in/gk_ZTZdR #chameleon #multimodalai #artificialintelligence #ai #machinelearning #ml
- Danilo T.
if you know, you know. NVIDIA annual shareholder meeting today: June 26, 2024, 09:00 PST. in about an hour we'll have a chance to listen to new developments. it's breathtaking, the speed of NVIDIA's ability to navigate (well, i guess they must be using their own (ai) to decide what, where, and how to do things, and then some. huh?). more importantly: just as Apple and Microsoft defined the pervasiveness of computers and their applications, we believe NVIDIA holds that position currently. they have been handsomely rewarded in the markets, with a valuation as the most valuable company in the world. 3.3 trillys. and going. so any and all announcements are truly where the centre of the universe of chip design and manufacturing exists.
it's more than simply offering state-of-the-art chip design for (ai) modelling. where Alphabet Inc. organizes the world's information, NVIDIA makes sense of it, and everyone else applies it to their subject-matter expertise. this is the change that is afoot at this moment. where it was once a race to offer advertisers of services/products a better understanding of their clientele, now the lens is sharper. there is no need to suggest age/sex/ethnicity/generational differences. there are behaviours. these behaviours are independent of what data was used to point adverts to, to get people to buy more sh^t. that's the base of the u.s. economy; it's apparently consumer driven. and so in this post-pandemic world, where it literally stopped and we were forced to sit still and wait it out as the storm clouds passed, in hopes that no one, not us or loved ones, would perish: that time was a generational shift. we all felt it, no matter what class you were.
NVIDIA is the fulcrum, Google is the waterfall. or Google is the head and NVIDIA the neck: wherever the neck turns, that is where the head has its attention, and the neck keeps the attention of the head there. so it appears as if, in many ways, Google has in essence offered the search part of its business up for grabs. this is why Perplexity exists, and others. a new dynamic pricing auction model is about to emerge. it will, as it is by nature, be adversarial. let's not fool ourselves: we are designed to be adversarial. it's our nature. and in coveting $$$$$ as the prime objective, money for the sake of money, this will destroy a lot of thoughtful, mindful, intentionally beneficial innovation. why? well, it's not playing according to the whims of a recommendation machine that rewards those that are part of its invisible army.
today is a crowning. it's a crown upon the head of jensen. this is akin to king charles and his crowning. we wonder who will be at the crowning to recognize this eventful day. it's good to be king. ;) source: https://lnkd.in/eumX4num
- Giuseppe Manzari
On April 18th, just last week, Meta unveiled Llama 3, lauded as "the most capable openly available LLM to date". This remarkable achievement by the Meta team comes hot on the heels of Llama 2's release last July. Notably, the new models surpass their predecessors and competing offerings from other providers in terms of performance. Meta's commitment to open sourcing these powerful models while prioritizing model safety and responsible usage is commendable. (https://lnkd.in/ghv3RJRz) Yesterday, Microsoft introduced Phi-3 Mini, a member of the Phi-3 family of models, promoted as "the most capable and cost-effective small language models available". According to Microsoft, the key to achieving high performance in such a small package lies in the quality of the training data. The model has been open sourced and is now available on Ollama and Hugging Face. (https://lnkd.in/gKgSmrBn)
These compact yet powerful models serve as invaluable tools for creators eager to bring their visions to life or simply experiment with cutting-edge technology, particularly for those with limited access to resources. Shortly after the release of Llama 3, reports began surfacing of enthusiasts running the 8B model on a Raspberry Pi 5 equipped with just 8GB of RAM, a modest $80 single-board computer. (https://lnkd.in/gbeQPhQ2) While the performance understandably reflects the limitations of such a platform, the fact that the model runs and generates output is nothing short of magical. Similarly, the Phi-3 Mini model can run on ubiquitous devices such as smartphones.
Reflecting on my own journey, I recall the pivotal role that early access to a low-cost computer played in shaping my passion for coding. The gift of an 8-bit Commodore 64 home computer ignited my curiosity at a young age. Later, with the guidance of an outstanding Computer Science teaching staff at my high school, IISS Marconi-Hack Bari, I embarked on ambitious projects like a networked multiplayer version of Battleship written in Turbo Pascal and 8086 Assembly, complete with custom sprites and lo-res graphics. These formative experiences paved the way for my career, eventually leading me to various engineering and leadership roles. It all began with a simple, low-cost, modestly capable computer used for experimentation: a gateway to discovering my life's passion.
I encourage educators and parents alike to consider setting up similar low-cost experimentation environments. By introducing younger generations to open-source AI technologies, we can nurture their creativity and help them uncover their true passions. It's not just a weekend project; it's an investment in their future and the future of innovation and discovery. #education #youth #AI #LLM #future #innovation
- Antti Pasila
NVIDIA has a clear "unfair advantage" in the market right now. They are ahead of the game on so many fronts: they sit on the hardware every company working with AI needs, and they are the preferred partner/CVC for any AI startup at the moment. If I were Jensen Huang, I would issue 10% new stock and raise a $300bn war chest at the same time as the 10-for-1 split. I would take $10bn of that money, split it into 40k tickets of $250k, and fund basically any new novel AI project (no ChatGPT wrappers). This way NVIDIA would have a stake in ~75% of all AI-related startups and the massive amount of talent those companies attract. They would ultimately become unstoppable while still having $290bn left in the war chest 🤯
- Neha Gupta
At Uniphore, we've developed a highly efficient fine-tuning methodology that turns smaller LLMs (and others) into truly formidable tools, making LLM-based AI more accessible to enterprises everywhere without sacrificing their requirements for data privacy, cost, latency, and high accuracy. We benchmarked fine-tuned Llama-3 8B, vanilla Llama-3 8B, and GPT-4 on eight different datasets, covering tasks such as question answering (Q&A), Q&A over structured data/tables, named entity recognition (NER), summarization, and agent-based functions that are most relevant for the enterprise AI space. We achieved a 15% median increase in accuracy compared to GPT-4, a 26% median increase in performance compared to vanilla Llama-3 8B, and an 11x reduction in inference cost. We also developed an in-house framework with optimized code for parameter-efficient instruction fine-tuning of LLMs, using the best-in-class available toolsets along with our own modifications, which led to an eight-fold reduction in training time. This allows us and our customers to instruction fine-tune models using our APIs in just a few hours, based on their datasets and tasks, ensuring the models work effectively for their applications. Details here: https://lnkd.in/gST5tQNZ
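The post doesn't share Uniphore's actual framework, but the core idea behind parameter-efficient fine-tuning can be sketched in a few lines: freeze the base weights and train only a small low-rank adapter whose product is added to the frozen matrix (the LoRA recipe). A minimal, purely illustrative sketch with toy dimensions; real systems use libraries such as Hugging Face PEFT, and all names here are hypothetical:

```python
# Low-rank adaptation sketch: the frozen base weight W is never modified;
# only the small adapter matrices A (r x d) and B (d x r) would be trained.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def matadd(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

d, r = 8, 2  # hidden size and adapter rank (tiny, for illustration)
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base (identity here)
A = [[0.1] * d for _ in range(r)]   # trainable, r x d
B = [[0.0] * r for _ in range(d)]   # trainable, d x r; zero-init => no change at start

def adapted_forward(x):
    # y = (W + B @ A) x, computed without ever touching W's values
    delta = matmul(B, A)
    Wp = matadd(W, delta)
    return [sum(w * xi for w, xi in zip(row, x)) for row in Wp]

full_params = d * d                 # what full fine-tuning would update
adapter_params = d * r + r * d      # what LoRA-style tuning updates
print(adapter_params, full_params)  # -> 32 64
```

With realistic sizes the gap is what makes this "parameter-efficient": at d = 4096 and r = 8, the adapter is ~65k parameters against ~16.8M for the full matrix. The zero-initialized B also guarantees the adapted model starts out identical to the base model.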
- Arnab Chattopadhyay
I agree with what you say about the trajectory of AI. But we need to ponder what really happens when genAI becomes almost part of the self (something similar to what you said). My greatest fear is that this will dramatically exacerbate problems of obesity, addiction, enormous debt build-up, social unrest, etc. The one percent of the population who learn AI and don't drown in information overload will do phenomenally well. What about the remaining 99%? Just wait 10 more years; you will see what extreme damage will happen (particularly among the youth...). #artificialintelligence #ai #future #agi #asi
- Dan I.
I was playing with the new Meta AI image gen and loved the feature they've come up with for dynamic real-time preview image generation. I started digging into how they've done it. It appears that they used the GAN approach (Generative Adversarial Networks), at least for the initial mock generation part. This is interesting, as it now completes the 3 main approaches to gen AI image gen:
- GAN (Meta AI)
- Transformers (DALL-E)
- Diffusion (Stable Diffusion)
(People who know more about this, feel free to correct me.) Each one obviously has its own pros, cons, and capabilities, but I like the new feature bar set by this dynamic preview capability. Oh, and by the way, Llama 3 is very impressive as well, and it works on Groq, which provides lightning-fast ASIC-based inference!
- Jay (JieBing) Yu, PhD
New iterative reasoning technique could improve RAG app accuracy by 50% (vs. naive RAG)! Another solid example of the rapid innovation cycle in RAG research. BTW, you can quickly find answers to questions like "Why did combining RAG with fine-tuning and iterative reasoning, such as OODA, improve accuracy by 50 percent?" in Epsilla (YC S23)'s "RAG Research Paper" smart search app at https://lnkd.in/gVgzpbue, which now includes more than 360 research papers. #genai #rag #smartsearch #innovation
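The post doesn't spell out the OODA-style technique, but the general shape of iterative RAG is: retrieve, check whether the evidence actually covers the question, and re-query if not. A toy sketch with a keyword-overlap retriever standing in for vector search; the corpus, helper names, and reformulation list are all hypothetical, and a real system would use embeddings and an LLM for each step:

```python
# Toy iterative retrieval loop: observe results, decide whether they
# cover the question, and re-orient the query before answering.

CORPUS = {
    "doc1": "fine-tuning adapts a base model to a narrow task",
    "doc2": "iterative reasoning lets a rag system refine its query",
    "doc3": "combining rag with fine-tuning improved accuracy in the study",
}

def retrieve(query):
    # crude keyword-overlap score instead of vector search
    terms = set(query.lower().split())
    scored = [(len(terms & set(text.split())), doc) for doc, text in CORPUS.items()]
    best = max(scored)
    return best[1] if best[0] > 0 else None

def covers(question, doc_id):
    # "observe/orient": does the retrieved doc mention every key term?
    key_terms = [t for t in question.lower().split() if len(t) > 3]
    return doc_id is not None and all(t in CORPUS[doc_id] for t in key_terms)

def iterative_rag(question, reformulations, max_steps=3):
    query = question
    doc = None
    for step in range(max_steps):
        doc = retrieve(query)
        if covers(question, doc):
            return doc, step + 1          # enough evidence: "act"
        if step < len(reformulations):    # otherwise "decide" to re-query
            query = reformulations[step]
    return doc, max_steps
```

For example, `iterative_rag("rag fine-tuning accuracy", [])` finds `doc3` in one pass, while an unanswerable question exhausts the step budget and returns no document; the claimed accuracy gains come from letting the loop recover from a bad first retrieval instead of answering from it.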
- Jason Inasi
AI as an engine of wealth creation.
If you joined NVIDIA 5 years ago as a mid-level product manager with an annual $77K stock grant over 4 years, just that initial grant would be worth ~$10.6M today.
If you joined NVIDIA 5 years ago as a senior software engineer with an annual $102K stock grant over 4 years, just that initial grant would be worth ~$14.8M today.
If you joined NVIDIA 5 years ago as a junior marketer with an annual $31K stock grant over 4 years, just that initial grant would be worth ~$4.5M today.
Don't miss this next wave of wealth creation.
- Jason Stanley
Meta's Llama 3 dropped just a few days ago, but already there are close to 1,000 variants publicly available on Hugging Face. Most people read base model evaluations and think they apply to all these deployments, but that ain't the case. Or at least the extent to which it's true is poorly understood. I want to see more eval work focus on the performance and risk shadow cast by base models on their downstream progeny. Basically, if base models aren't being used as-is, do their evals matter? Or are they poorly predictive of what will come of them post-fine-tuning?
I don't have the time/resources to work on this right now, but I'd love to see someone take this project on: across all variants posted on HF, how do evals on fine-tuned versions compare to evals done on the original base model? What kind of relationship is there between the two? Is the latter a constraint or a directional anchor for the former? The best setup here needs to find some way of ignoring a certain amount of noise (e.g., devs creating their own copy but not doing any fine-tuning, or doing some kind of terrible fine-tuning) in order to focus on more serious attempts to fine-tune for stronger performance on specific use cases, stronger safety and security, or something else.
This question is increasingly important because fine-tuning is becoming easier and faster to do. That means it's more and more likely that developers will tune models to their own liking. Evals done on prior upstream versions are potentially meaningless... 'potentially' is the key word, because we don't have a good understanding of the persistence of performance and risk, the elasticity of learning.
By the way, if you're interested in working on hard questions around LLM evaluation, like these, we're hiring! Our Trust & Governance research team is looking for a Staff Applied Research Scientist. Come help us tackle these challenges! Link to the job posting in the comments. #artificialintelligence #trustworthyai #llm
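The base-vs-variant comparison the post calls for could start as simply as computing, per model pair, how well the base model's per-task scores predict the fine-tune's, e.g. a rank correlation across benchmark tasks. A toy sketch with made-up scores (illustrative only; a no-ties Spearman, not a full analysis):

```python
# Spearman rank correlation between a base model's per-task eval scores
# and a fine-tuned variant's scores. rho near 1 => base-model evals are
# directionally predictive of the variant; near 0 => they tell you little.
# Assumes no tied scores (no tie-averaging is done).

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical scores on five benchmark tasks for one base/fine-tune pair.
base      = [0.61, 0.48, 0.75, 0.39, 0.82]
finetuned = [0.66, 0.55, 0.74, 0.51, 0.85]
rho = spearman(base, finetuned)
```

Run across the ~1,000 HF variants, the distribution of `rho` (and of raw score deltas) is one concrete way to measure how much of a "directional anchor" base-model evals really are.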
- Josh Levitan
1. IMO, agent patterns are one of the most interesting ways to extend LLMs. memary leverages a graph database (neo4j) for long-term memory (to get around the limited memory inherent in even the largest LLM context windows) and augments it with other tools like web search and computer vision. https://lnkd.in/gfFcBqJx Looks interesting.
2. More on how consolidation affects the entire healthcare industry: https://lnkd.in/gXJKqNJB The Change hack/ransomware attack not only affected claims, service, refills, etc., but now CMS and NCQA are delaying the entire HEDIS reporting process (used for Medicare/Medicaid quality ratings) due to the impact.
3. NARA bans ChatGPT use agency-wide. https://lnkd.in/gW8T94JU Federal government agencies see this as a risk, including their information leaking and being used to train models (which may then output wrong information or expose PII, PHI, or other proprietary information).
4. Oh great, now I can't even be angry anymore :) https://lnkd.in/gcrN9mjx
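The graph-backed long-term memory idea in item 1 boils down to storing facts as (subject, relation, object) triples and pulling related context by traversal rather than cramming everything into the prompt. A toy in-memory stand-in (hypothetical API; memary itself uses Neo4j and a much richer schema):

```python
# Toy triple store standing in for a graph-database memory layer.
class GraphMemory:
    def __init__(self):
        self.triples = set()

    def remember(self, subject, relation, obj):
        self.triples.add((subject, relation, obj))

    def recall(self, subject, relation=None):
        # objects linked to `subject`, optionally filtered by relation
        return sorted(
            o for s, r, o in self.triples
            if s == subject and (relation is None or r == relation)
        )

    def neighbors(self, subject, hops=1):
        # breadth-first expansion: the kind of traversal an agent can use
        # to pull related context that no longer fits in the LLM's window
        frontier, seen = {subject}, {subject}
        for _ in range(hops):
            frontier = {o for s, _, o in self.triples if s in frontier} - seen
            seen |= frontier
        return sorted(seen - {subject})

mem = GraphMemory()
mem.remember("memary", "uses", "neo4j")
mem.remember("memary", "augments", "web search")
mem.remember("neo4j", "is_a", "graph database")
```

A two-hop `mem.neighbors("memary", hops=2)` pulls in "graph database" via the neo4j node, which is the payoff over a flat key-value memory: related facts surface without being stored against the original key.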
- Renchu (Richard) Song
"... native solutions built entirely around vectors will provide the 'speed, memory safety, and scale' needed as vector data explodes." "... we do advanced vector search in the best way possible." At Epsilla (YC S23), we share the same vision and approach. That's why we have been leading the charge with our unique innovations in a 10x faster, cheaper, and better vector search engine as the built-in foundation for our no-code RAG-as-a-Service platform. You can experience the accuracy and relevancy of our advanced technologies via a number of high-quality AI-powered smart search and chatbot applications below:
1. Smart Search on 180+ RAG Research Papers (https://lnkd.in/eMPdVae5): This smart search app leverages our RAG platform and vector database technology to help users navigate the massive collection of 180+ papers with ease and keep up with the latest and greatest research on this important topic.
2. AI Assistant Chatbot for Knowledge Graphs (https://lnkd.in/eeBTFe_Q): Inspired by insights from Mike Dillinger's posts, this AI assistant chatbot leverages our RAG platform and vector database technology to generate highly relevant, nuanced, and contextually aware responses to help users stay ahead of the game on topics like LLMs, knowledge graphs, and GenAI.
3. AI Assistant Chatbot for Taylor Swift (https://lnkd.in/e_m6dB4P): Powered by our RAG platform and vector database technology, this interactive AI-powered chatbot can answer any question about Taylor Swift, one of the most impactful artists and successful businesswomen in the world.
These high-quality AI applications underscore the fact that Epsilla's vector database technology is not just keeping up with the demands of generative AI, but also setting new standards for what is possible. In addition, our no-code RAG platform allows AI app builders to develop and deploy high-quality apps like these within hours. Join us as we continue to innovate and lead in this exciting era of AI-driven applications! (https://epsilla.com) #VectorDatabase #RAG #Chatbots #GenerativeAI #NoCodePlatforms
- Alon Faktor, PhD
Hey, I'd like to give a shout-out to ClearML for helping us at Vimeo develop AI-based systems. We have been happy customers for a few years now, and I'd like to share a bit about how we use ClearML. We use ClearML Datasets to cache a dataset of video transcripts and run tests that load directly from ClearML Datasets, which lets us speed up data loading and ensure consistency at the same time. Moreover, we use ClearML to keep our benchmark annotations in one place and track our system's performance on the benchmark every time we run an experiment or change our prompts. ClearML lets us see the different parameters and prompts used in each experiment and monitor improvements or regressions in our performance. We also use ClearML to run large-scale tests and help with statistical evaluation of our methods. For example, we developed a RAG (Retrieval-Augmented Generation) Q&A system and wanted to verify that the LLM would not answer certain questions or user queries outside the scope of the video. We used ClearML to collect and analyze the RAG responses on many videos for predefined out-of-scope user queries and got good visibility into the system's performance. Also, ClearML's comparison feature is great for tracking the improvement of our metrics across successive versions of our systems.
- Dmitriy Pavlov
Well, with GenAI the crystal-clear writing skills of Meta's Distinguished Engineers are within reach of anyone with decent prompting skills. A few million in extra salary may be harder to materialize, but GenAI can give career advice and create business strategies. It's a force multiplier for small startups. After all of that, perhaps you shall start your own Meta.