"If you've deployed Kubernetes to 10 machines and are running workloads on it, we show you that of those 10 machines, you're only consuming 30% of them," said OpenCost Senior Community Manager Matt Ray in his chat with Open at Intel's Katherine Druckman. "This is common. People over provision by a lot. Even if you're on premise, that's consuming resources that you didn’t expect." Read the full transcript: https://intel.ly/3Yll6h2
About us
Open.Intel's mission is to educate, inspire, and connect the open ecosystem community to achieve more, together. We're committed to fostering an inclusive and vibrant open source community where developers can work together to advance technology in a way that's transparent, secure, and accessible to all. Join us!
- Website
- open.intel.com
- Industry
- Semiconductor Manufacturing
- Company size
- 10,001+ employees
Updates
-
Open.Intel reposted this
Get started with text generation using this AI sample code. You'll see how to train a model using an LSTM network and the Intel Extension for TensorFlow on Intel GPUs. The main goal of a text generation model is to predict the probability of the next word in a sequence, given the previous words as input. See the code: https://intel.ly/3Y3c603 #ArtificialIntelligence #TensorFlow #LSTM
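The sample linked above uses the Intel Extension for TensorFlow; as a library-free illustration of the core idea (a model that, given the previous word ids, outputs a probability for each candidate next word), here is a minimal NumPy sketch of an untrained LSTM language model. All names, sizes, and weights are illustrative, not taken from the sample.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Minimal LSTM language model: given previous word ids,
    produce a probability distribution over the next word."""
    def __init__(self, vocab_size, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.vocab_size = vocab_size
        self.hidden = hidden
        d = vocab_size + hidden
        # One weight matrix per gate: input, forget, output, cell candidate.
        self.Wi = rng.normal(0, 0.1, (hidden, d))
        self.Wf = rng.normal(0, 0.1, (hidden, d))
        self.Wo = rng.normal(0, 0.1, (hidden, d))
        self.Wc = rng.normal(0, 0.1, (hidden, d))
        self.Wy = rng.normal(0, 0.1, (vocab_size, hidden))

    def next_word_probs(self, word_ids):
        h = np.zeros(self.hidden)
        c = np.zeros(self.hidden)
        for w in word_ids:
            x = np.zeros(self.vocab_size)
            x[w] = 1.0                      # one-hot encode the word
            z = np.concatenate([x, h])
            i = sigmoid(self.Wi @ z)        # input gate
            f = sigmoid(self.Wf @ z)        # forget gate
            o = sigmoid(self.Wo @ z)        # output gate
            c = f * c + i * np.tanh(self.Wc @ z)
            h = o * np.tanh(c)
        logits = self.Wy @ h
        e = np.exp(logits - logits.max())   # numerically stable softmax
        return e / e.sum()

model = TinyLSTM(vocab_size=10)
probs = model.next_word_probs([1, 4, 2])
print(probs)  # probability over the 10-word vocabulary; sums to 1
```

In the real sample, these weights are learned from a text corpus with TensorFlow's optimizers rather than initialized randomly.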
-
🎤 ICYMI! Intel's Katherine Druckman spoke with Devin Stein, founder of Dosu, about his AI platform, which is designed to ease the burden of software maintenance using AI automation. Used by CNCF projects and others, Stein's platform aims to automate tedious tasks like issue labeling to free maintainers to work on the tough problems. Subscribe in your favorite podcast app or check it out at: https://intel.ly/4bWOmOv #Podcast #NewEpisode #OpenSource #CloudNative #Security
-
Open.Intel reposted this
Learn the best practices and tools for building high-performance GenAI applications on budget-friendly Intel Arc graphics cards in this webinar hosted by Intel engineers. The session focuses on how to implement high-performing generative AI applications using Stable Diffusion, Llama3 quantization, and Intel-optimized extensions for PyTorch and Transformers. Register now: https://intel.ly/4bIWbqW #ArtificialIntelligence #GenerativeAI #StableDiffusion
-
Open.Intel reposted this
The development of standards and best practices must accompany innovation in AI so the technology can be deployed and adopted responsibly. Intel Corporation is committed to advancing AI responsibly and is proud to join the Coalition for Secure AI (#CoSAI) as a founding member, along with Google, IBM, and other industry leaders. Hosted by the open standards body OASIS Open, the initiative aims to foster a collaborative ecosystem that gives practitioners and developers the guidance and tools to create #AI systems that are secure by design. https://lnkd.in/g9BbrBUE
-
👋 Join us July 18 for a new, hands-on, live coding series hosted by Intel Open Source AI Evangelist Ezequiel Lanza with special guest William Galindez Arias from GitLab. #opensource #ML #AI #MachineLearning #CloudNative #LLM
Code & Deploy: Build and Deploy an ML Binary Classifier
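For readers who can't make the session, the core idea of an ML binary classifier can be sketched without any framework. This is a generic logistic-regression example trained on made-up toy data, not the session's actual code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=5000):
    """Fit a logistic-regression binary classifier with batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)           # predicted probability of class 1
        grad_w = X.T @ (p - y) / len(y)  # gradient of the log loss
        grad_b = (p - y).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    """Label a point 1 when its predicted probability crosses 0.5."""
    return (sigmoid(X @ w + b) >= 0.5).astype(int)

# Toy, linearly separable data: class is 1 when x0 + x1 is large.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
              [1.0, 1.0], [0.9, 0.9], [0.1, 0.2]])
y = np.array([0, 0, 0, 1, 1, 0])
w, b = train_logistic(X, y)
print("training accuracy:", (predict(X, w, b) == y).mean())
```

A deployment step, as in the session title, would typically wrap `predict` behind an HTTP endpoint; the webinar covers that part with its own tooling.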
-
Hey, friends! Our Code & Deploy live coding session is happening TODAY at 1:00 EST. We hope to see you there!
-
🎙️🔐 Tune in to the latest Open at Intel podcast! Host Katherine Druckman chats with Okta's Andres Aguiar about #OpenFGA, data sync challenges, and the future of auth models. #DevSecOps Read the transcript: https://intel.ly/3W0F7Xy
Fine-Grained Authorization with OpenFGA
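OpenFGA models authorization as relationship tuples of the form (user, relation, object). As a rough, hypothetical illustration of that idea (this is not the OpenFGA SDK or API; the store, rewrite rule, and names here are invented), a "check" query might be sketched like this:

```python
# Minimal sketch of the relationship-tuple idea behind fine-grained
# authorization: store (user, relation, object) tuples and answer
# check queries, letting a stronger relation imply a weaker one.

TUPLES = {
    ("anne", "editor", "doc:roadmap"),
    ("bob",  "viewer", "doc:roadmap"),
}

# Rewrite rule: holding any relation in the value set grants the key.
IMPLIED_BY = {"viewer": {"editor"}}  # editors can also view

def check(user, relation, obj):
    """True if the user has the relation on the object,
    directly or via an implied (stronger) relation."""
    relations = {relation} | IMPLIED_BY.get(relation, set())
    return any((user, r, obj) in TUPLES for r in relations)

print(check("anne", "viewer", "doc:roadmap"))  # True: editor implies viewer
print(check("bob",  "editor", "doc:roadmap"))  # False: bob is only a viewer
```

Real OpenFGA expresses the rewrite rules in a typed authorization model and resolves far richer relationships (groups, parent objects); the tuple-and-check shape is the part this sketch keeps.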
-
Ezequiel Lanza and Eduardo Rojas Oviedo have been publishing a series of practical articles on our Intel Tech Medium channel about using and optimizing retrieval-augmented generation (RAG) to improve your LLM results. Don't miss their latest article, published yesterday, on boosting your RAG system's accuracy by adding a reranker to select the most relevant context chunks.
🚀 NEW RAG blog post! 🚀 I'm excited to share our new blog post (with Eduardo Rojas Oviedo) on Medium: "Improve Your Tabular Data Ingestion for RAG with Reranking". 📊✨ In this post, we walk through enhancing your tabular data ingestion process using reranking techniques. 🔍 What you'll learn:
- The importance of reranking in tabular data ingestion.
- Practical steps to implement reranking in your RAG pipeline.
- Real-world examples and applications to help you get started.
Check it out and let us know your thoughts! 🔗 Read the full article: https://lnkd.in/g4Ns-Sej #DataScience #AI #MachineLearning #RAG #LLM #Reranking
Improve your Tabular Data Ingestion for RAG with Reranking
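The retrieve-then-rerank pattern the articles describe can be sketched in a few lines. The article's actual pipeline uses a trained reranker model; in this toy sketch, word-overlap scores stand in for both the retriever and the reranker, and all data is invented for illustration:

```python
# Two-stage RAG retrieval: a cheap first pass over all chunks,
# then a finer rerank over the small candidate set.

def retrieve(query, chunks, k=3):
    """First stage: crude score = count of words shared with the query."""
    qwords = set(query.lower().split())
    scored = [(len(qwords & set(c.lower().split())), c) for c in chunks]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [c for _, c in scored[:k]]

def rerank(query, candidates, top_n=1):
    """Second stage: overlap normalized by chunk length
    (a stand-in for a cross-encoder reranker)."""
    qwords = set(query.lower().split())
    def score(c):
        words = c.lower().split()
        return len(qwords & set(words)) / len(words)
    return sorted(candidates, key=score, reverse=True)[:top_n]

chunks = [
    "revenue table for fiscal 2023 by region",
    "employee handbook and onboarding notes",
    "quarterly revenue summary table",
    "revenue recognition policy details",
]
query = "quarterly revenue table"
top = rerank(query, retrieve(query, chunks))
print(top)  # ['quarterly revenue summary table']
```

The design point the article makes survives even in this toy: the expensive, higher-quality scorer only ever sees the handful of candidates the cheap retriever lets through.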
-
ICYMI! Couldn't attend yesterday's OPEA Community Day? We've got you covered.
Check out the recording, transcript, and slides from the #OPEA community event #genai #llm #opensource #demo https://lnkd.in/eBv6PZTn
OPEA Community Day - July 16th - LF AI Foundation - Confluence