„I had the chance to get to know Grzegorz while he was working as a Finance Portfolio Director for Netguru. While our daily cooperation was limited (different teams), we had a few projects where we worked together. During this time, Grzegorz clearly demonstrated not only deep knowledge of data engineering and data science, but also the skills and competencies of a good leader and a business-savvy professional. He was always able to anticipate and present project risks, as well as at least a few excellent ways to address them. Excellence in general, and critical thinking, are what you can expect from cooperation with Grzegorz. I highly recommend him for innovative and ambitious projects in any way (but deeply) related to data, AI (including generative AI) and ML.”
Suszec, Silesian Voivodeship, Poland
Contact info
5K followers
500+ connections
About
Activity
-
In 2023, 61% of companies did not achieve their revenue targets. Clari surveyed 400+ revenue teams & professionals to find out why: - Hidden…
Liked by Grzegorz Mrukwa
Experience and education
-
Clari
Licenses and certifications
Received recommendations
16 people have recommended Grzegorz Mrukwa
Other similar profiles
- Mateusz Opala, Cracow Metropolitan Area
- Mateusz Czajka, Poznań Metropolitan Area
- Michal Gaworski, Kraków
- Joanna Polanska, Gliwice
- Martin Burlinski, Cracow Metropolitan Area
- Mateusz Rzeszutek, Kraków
- Marta Zawilińska, Kraków
- Jakub Łątkiewicz, Cracow Metropolitan Area
- Konrad Madej, Poland
- Kuba Barć, Cracow Metropolitan Area
- Katarzyna Czaja, Katowice
- Joe Heins, Maple Valley, WA
- Bartosz Pranczke, Poznań
- Danielle Mathis (She/Her), San Diego, CA
- Dominik Barwacz, Kraków
- Dominika Szenkelbach, Katowice
- Krystian Bergmann, Poznań
- Swadesh Roul, Bangaluru
- Wojciech Czajkowski, Miami, FL
- Radek Zaleski, Cracow Metropolitan Area
Discover more posts
-
LlamaIndex
We have out-of-the-box support in LlamaIndex for Snowflake SOTA arctic-embed models (which outperform other leading embedding models in benchmarks) - it's a one-liner through our Hugging Face integration! And in ~4 more lines of code, you can build a RAG indexing pipeline over your data with these embedding models. Check out the screenshot below for how to get up and running with this - we might have more detailed benchmarks soon, stay tuned 💡
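To make the mechanics behind an embedding-backed RAG index concrete, here is a deliberately library-free sketch. This is not the LlamaIndex or arctic-embed API; `toy_embed` is a hypothetical stand-in for a real embedding model, and the index and retrieval functions only illustrate the embed-then-rank-by-cosine-similarity pattern the integration wraps.

```python
import math

def toy_embed(text):
    # Stand-in for a real embedding model (e.g. arctic-embed): map a string
    # to a fixed-size, L2-normalized vector of character-frequency features.
    vec = [0.0] * 26
    for ch in text.lower():
        if 'a' <= ch <= 'z':
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def build_index(docs):
    # "Indexing pipeline": embed every document once, up front.
    return [(doc, toy_embed(doc)) for doc in docs]

def retrieve(index, query, k=1):
    # Embed the query and return the k most similar documents.
    qv = toy_embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

index = build_index(["snowflake arctic embeddings", "cooking pasta at home"])
print(retrieve(index, "arctic embedding models"))
# ['snowflake arctic embeddings']
```

A production setup swaps `toy_embed` for a learned model; everything else about the index/retrieve flow stays structurally the same.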
215
3 comments -
Addepto
A few months ago, we shared our collaboration with an online higher education institution. Our Senior MLOps Engineer, Bart Grasza, Data Scientist, Kamil Abram, and Senior Data Engineer, Mateusz Kijewski, developed an MLOps platform that provides a foundational template. 🙌 This platform integrates MLOps best practices, automates the setup and update processes, and allows for seamless workflow adaptation across different environments. You can now see this platform in action, presented at the Databricks Data Summit: https://lnkd.in/dUbJrbVi
23
-
dbt Labs
You know a Semantic Layer would be hugely valuable, but how do you actually build such a thing? Pro tip: crafting a Semantic Layer is about building iterative velocity alongside accuracy, so that when your stakeholders ask about Revenue MoM grouped by Attribution Channel, you can answer instead of adding a ticket to the backlog. Start with these four steps:
1. Identify a Data Product that is impactful: find something that is in heavy use and high value, but fairly narrow in scope. Don't start with a broad executive dashboard that shows metrics from across the company, because you're looking to migrate the smallest amount of modeling for the highest amount of impact. For example, a good starting place would be a dashboard focused on Customer Acquisition Cost (CAC) that relies on a narrow set of metrics and underlying tables that are nonetheless critical for your company.
2. Catalog the models and their columns that service the Data Product, both in dbt and the BI tool, including rollups, metrics tables, and the marts that support them. Pay special attention to aggregations, as these will constitute metrics.
3. Melt the frozen rollups in your dbt project, as well as variations modeled in your BI tool, into Semantic Layer code.
4. Create a parallel version of your data product that points to Semantic Layer artifacts, audit it, and then publish. Creating in parallel takes the pressure off, allowing you to fix any issues and publish gracefully. You'll keep the existing Data Product as-is while swapping the clone to be supplied with data from the Semantic Layer.
Dig deeper into the step-by-step process of how to ship a Semantic Layer in pieces at our link in the comments.
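The core idea the steps above converge on can be sketched in a few lines: a metric is defined once as data, and ad-hoc "metric X grouped by dimension Y" questions are answered by one generic query function rather than by hand-built frozen rollups. This is a hypothetical, engine-free illustration of the concept, not dbt's Semantic Layer API; the `ORDERS` rows, `METRICS` registry, and `query` function are all invented for the sketch.

```python
# Toy "fact table": rows an analytics engineer would normally model in dbt.
ORDERS = [
    {"channel": "paid", "revenue": 120.0},
    {"channel": "paid", "revenue": 80.0},
    {"channel": "organic", "revenue": 50.0},
]

# The semantic layer: each metric is declared once as (source column, aggregation).
METRICS = {
    "revenue": ("revenue", sum),
}

def query(rows, metric, group_by):
    # One generic function answers every "metric grouped by dimension" question,
    # replacing a bespoke rollup per dashboard.
    column, agg = METRICS[metric]
    groups = {}
    for row in rows:
        groups.setdefault(row[group_by], []).append(row[column])
    return {key: agg(values) for key, values in groups.items()}

print(query(ORDERS, "revenue", "channel"))
# {'paid': 200.0, 'organic': 50.0}
```

The payoff of step 3 ("melt the frozen rollups") is exactly this shape: grouping and aggregation move out of pre-built tables and into a definition that can be recombined on demand.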
370
26 comments -
Monte Carlo
There was a whole lot of talk about GenAI at the Snowflake and Databricks Summits this year, but don't be fooled – the star of both shows wasn't actually the flashy LLM demos. It was data enablement. The announcements that made the biggest splash this year were the ones that focused on the new ways Snowflake and Databricks customers can enable data for a given use-case. The features that will help them get the right tools in place to deliver structured, governed, and reliable data into a modern data platform. So what were these announcements? We compare and contrast the events to understand the biggest news, what it says about the state of the data industry, and why data will always be the star of AI. Check it out: https://lnkd.in/eYA3kCEb #GenAI #DataCloudSummit #DataAISummit #dataquality #dataenablement
23
1 comment -
DuckDB
New blog post by Alex Monahan: Benchmarking Ourselves over Time at DuckDB. The DuckDB team's philosophy is to ensure correctness first, then iterate and optimize to improve performance. This blog explores how that played out over the last three years, during which DuckDB became approximately 3-25x faster and 10x more scalable. Read more at https://lnkd.in/dnQ7Nya8
197
16 comments -
Diggibyte Technologies Private Limited
Have you heard of DBRX, the new model from Databricks? DBRX is a state-of-the-art large language model from Databricks. A language model is a computer program that has been fed enough examples to be able to recognize and interpret human language or other types of complex data. Know more by reading this blog: https://lnkd.in/gWaRMN2i #databricks #dbrx #databricksfeature #languagemodel #llm #program #human #data #dataengineering William Rathinasamy Sekhar Reddy Anuj Kumar Sen Lawrance Amburose Brindha Sendhil Praveen Kumar C Rashika S Parthiban Raja Harshith R
20
-
AtomicJar, Inc. (acquired by Docker)
Learn How to Run Hugging Face Models Programmatically Using Ollama and #Testcontainers 🤖💻 Dive into the future of AI/ML with easy model deployment 👋👋 Say goodbye to complex setups and hello to seamless integration 👉👉 Docker, Inc blog https://lnkd.in/g_F89xCk by Ignacio López Luna
8
-
Keboola
📣 DATA SCIENTISTS & ENGINEERS 📣 We've compared Snowpark and Spark for you. Is it time for a switch? The results speak for themselves… Snowpark outperformed Spark in both speed and cost for most data engineering and machine learning tasks. It’s also easier to set up, eliminates a whole step from the data processing lifecycle, and is more efficient with larger datasets. We know which one we’re choosing… 😏 Check out our full benchmarking test and see how the results compare: https://lnkd.in/ecKsfDRq
259
1 comment -
Jesse Anderson
“Everybody's got to have a little bit of data engineering in their skill set, because every application provides data. Even if it's not direct, it'll create it indirectly with logs, because you always have to monitor your application. So there's always going to be some kind of data somewhere, and being able to share and provide that data to its consumers could be hard or easy. And they can be demanding, so you want to be prepared for how you're going to provide that data.” If you're thinking about getting into real-time or big data, here's a good point from Hubert Dulay during our conversation on the Unapologetically Technical podcast. Make sure to tune in to learn more! Watch the full episode here: https://lnkd.in/dSVJU6WW
-
Mixpanel
We've added AI to the search tool in Mixpanel Docs 🧠 Now you can get instant answers to technical questions about Mixpanel, like: • What should be the first event I track? • How can I import Snowflake Data into Mixpanel? • What is a lookup table? Give it a try at docs.mixpanel.com and let us know what you think!
30
2 comments
Shelf
Challenged by RAG systems that sometimes miss the mark? ✖ The culprit is often the unstructured data you feed to your system. We’ve unpacked strategies to use data enrichment to hypercharge your RAG's precision. 🚀 Here's how you can transform confusion into clarity: 🎯 Named Entity Recognition for pinpoint accuracy 🔑 Keyword generation to give structure to unstructured data 🤔 Topic modeling for thematic insight 🔗 Link contextualization for richer connections Data isn't just fuel—it's the GPS for your AI's journey. Refine it, and watch your RAG system improve accuracy and precision. Your AI isn't lost—it just needs better directions. 🧭 https://lnkd.in/eu8fMPhh #AI #DataEnrichment #RAG #MachineLearning #DataScience #ArtificialIntelligence
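Of the enrichment strategies listed above, keyword generation is the easiest to make concrete. The sketch below is illustrative only: a bare term-frequency ranker with a tiny invented stopword list. Real pipelines would use NER models, TF-IDF, or an LLM, as the post implies; nothing here is Shelf's implementation.

```python
# Minimal "keyword generation" pass for a document chunk: drop stopwords,
# count terms, and return the top-k by frequency (ties broken alphabetically).
STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "for", "your", "it"}

def keywords(text, k=3):
    counts = {}
    for word in text.lower().split():
        word = word.strip(".,!?:;()")
        if word and word not in STOPWORDS:
            counts[word] = counts.get(word, 0) + 1
    ranked = sorted(counts, key=lambda w: (-counts[w], w))
    return ranked[:k]

chunk = "The retrieval step of the RAG system feeds the RAG prompt."
print(keywords(chunk))
# ['rag', 'feeds', 'prompt']
```

Attaching even keywords this crude to each chunk gives a retriever structured hooks to match on, which is the "structure for unstructured data" point the post makes.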
12
-
Okube
Good morning data community! Okube presents a tutorial on how to use Laktory data sources. Check it out to learn how you can simplify reading data from event files or tables, in isolation or in the context of data pipelines. What kind of data sources would you like to be supported next? Let us know in the comments below. https://lnkd.in/e3WF2Muw 🔗
1
-
Learn Data Engineering
Looking for a Redshift alternative that just makes sense? I once got asked if Spark can replace a data warehouse like Amazon Redshift. One alternative to Redshift that is out there and that makes sense to use is Databricks. Because Spark itself is just the processing engine, right? But what they did with Databricks, which I really like, is that you can actually build a warehouse yourself. You can start creating tables, you can ingest data, for instance from files, and run ETL jobs all within Spark. You can also put the data into tables and then either make it accessible through the compute cluster to external clients, or even spin up a separate data warehouse with separate resources for clients to come in and actually create reports from the data. So, Databricks it is ;) Try it out! In my Academy you'll find a great course to dive right into it. #dataengineer #dataengineering #datascience #bigdata #databricks #spark
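The workflow the post describes (create tables, ingest file data, query for reports) can be sketched engine-agnostically. In this sketch, Python's built-in sqlite3 stands in for the Spark SQL / Databricks engine, which it is not; the table, the CSV payload, and the report query are all invented for illustration.

```python
import csv
import io
import sqlite3

# "Create tables" inside the engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")

# "Ingest from files": parse a CSV payload and load it into the table.
csv_payload = "region,amount\neu,10.5\nus,20.0\neu,4.5\n"
rows = list(csv.DictReader(io.StringIO(csv_payload)))
conn.executemany(
    "INSERT INTO sales VALUES (:region, :amount)",
    [{"region": r["region"], "amount": float(r["amount"])} for r in rows],
)

# "Create reports": aggregate for downstream clients.
report = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(report)
# [('eu', 15.0), ('us', 20.0)]
```

On Databricks the same three moves become `CREATE TABLE`, `COPY INTO`/Auto Loader-style ingestion, and SQL over cluster-backed tables, but the create-ingest-report shape is identical.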
5
-
Smart Data Warehouse Solutions
Data Engineering: Working with semi-structured data types. In our previous post, we demonstrated how to parse comma-separated data into a structured file format within the Snowflake project that utilizes the medallion architectural framework. In this post, we'll introduce another file format that you'll frequently encounter in your data engineering projects: the JSON file format.
JSON (JavaScript Object Notation) is an open standard file format used for sharing data. It employs human-readable text to store and transmit data objects, which consist of attribute-value pairs and arrays. This format is commonly used for transmitting data between web applications and servers, making it very popular and suitable for our discussion and demonstration.
To explain in more detail, there are three common shapes of JSON: JSON objects, nested JSON objects, and JSON arrays.
JSON object:
"User_Name": { "First_Name": "Henry", "Last_Name": "Godson" }
Nested JSON object:
"User_Information": { "Date": "2024-05-21", "Sports": { "Football": "Good", "Swimming": "Excellent" }, "Name": "Henry" }
JSON array (arrays are written inside square brackets):
"Employees": [ { "First_Name": "Henry", "Last_Name": "Godson" }, { "First_Name": "Endy", "Last_Name": "Junior" } ]
There is also the simple array form, e.g. "Favourite_Sports": ["Football", "Baseball"].
While certain data migration tools, such as Azure Data Factory, possess built-in intelligence for transforming data from one format to another, most data engineering ETL projects require data engineers to perform similar transformations using either SQL or Python. To transform a JSON array using the medallion architectural framework, we have two major approaches:
1. Copy to Bronze and Transform: copy the JSON file into the Bronze layer, then transform the data into a structured format before moving it into the Silver table.
2. Direct Transformation to Bronze: transform the data directly into a structured format and deposit it into the Bronze layer straight from the staging environment. This approach is recommended, and we'll demonstrate how to achieve it in our upcoming post. Stay tuned!
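As a taste of the kind of transformation involved (not the post's upcoming demo, which it has yet to publish), here is a minimal Python sketch that flattens a JSON array of employee objects into flat, table-ready rows, the way a Python transform in the staging-to-Bronze step might.

```python
import json

# Raw semi-structured payload as it might land from a web API or event file.
raw = '''
{"Employees": [
    {"First_Name": "Henry", "Last_Name": "Godson"},
    {"First_Name": "Endy",  "Last_Name": "Junior"}
]}
'''

def to_rows(payload):
    # Parse the JSON document and flatten the array of objects into
    # (first_name, last_name) tuples ready to insert into a structured table.
    doc = json.loads(payload)
    return [
        (emp["First_Name"], emp["Last_Name"])
        for emp in doc["Employees"]
    ]

print(to_rows(raw))
# [('Henry', 'Godson'), ('Endy', 'Junior')]
```

In Snowflake the same flattening is typically done in SQL with `LATERAL FLATTEN` over a `VARIANT` column; the Python version just makes the object-to-row mapping explicit.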
3
-
TetraScience
The TetraScience team joined the 60,000 other data practitioners at last week's Databricks #DataAISummit in San Francisco. (In case you missed it, TetraScience announced a strategic partnership with Databricks a few weeks ago to accelerate the Scientific AI revolution.) We had some great conversations at the summit with leaders in data and analytics in the life sciences. The intel we gained from our discussions, and what we heard during the keynotes and breakouts, boils down to three essential takeaways worth sharing with the TetraScience community. The tl;dr:
- Every business wants to be an AI business
- Nearly all organizations are still in the early stages of their AI journey
- The state and quality of enterprise data continues to be the big stumbling block
Read more takeaways from Naveen Kondapalli, SVP Product & Engineering at TetraScience, including insights from sessions with Sander Timmer, PhD at GSK and Pushpendra Arora at Merck: https://lnkd.in/eGq7NJ_p
44
-
HealthDataViz
Check out HealthDataViz's latest blog article "Improve the UX of Tableau Filter Selections with a Dynamic Error Message" https://hubs.li/Q02Bm-ml0. In this blog Katie Bueno shares a trick for handling user messaging when a chart goes blank due to advanced filtering. #Tableau #dataviz #BlogPost
3
-
Open Data Blend
Polars recently made a few updates to its latest TPC-H benchmark results. After benchmarking several libraries, including Polars, Pandas, PySpark, DuckDB, Dask, and Modin, Polars came out as the best raw performer, with speed comparable to DuckDB's. Want to learn more about the TPC-H benchmark update? Read more here: https://lnkd.in/eHfWKwAz #Polars #Python
1
-
Forwrd.ai
Problem. Solved. ✅ ✅ ✅ How do you handle incomplete values in your prediction models? Cross-cloud data streams update instantly, in real time. The more data that comes into the model, the more robust and accurate your predictive modeling becomes. However, simultaneous data entries from multiple streams can lead to empty values and incomplete data within the datasets you use for your models. After analyzing billions of data points and building hundreds of predictive AI models, we identified incomplete datasets as a challenge within our builders' workflow. This week Forwrd launched a workflow within its automated data science capabilities to improve the precision of models built when empty fields are present in datasets. Forwrd now allows builders to choose how to handle empty fields based on their business use case: you can decide whether the model you are building should assign a value to the empty field (and what that value should be) or disregard the empty value. To read more and learn about the different use cases Forwrd has encountered while building models for its clients, check out the article below. https://lnkd.in/eB_rEdZa Forwrd continues to be your predictive superhero 🦸 🤖 🦾 in simplifying data science workflows and bringing predictive AI to GTM teams worldwide. #bepredictive #predictiveai #predictiveanalytics #AIBuilders #GTM #revops
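The choice the post describes, impute a value for an empty field or drop the record, can be sketched in a few lines. This is a hypothetical illustration, not Forwrd's implementation; the `handle_missing` function, the policy names, and the toy records are invented, and mean imputation is just one of the fill strategies a builder might pick.

```python
def column_mean(rows, field):
    # Mean of the field across records where it is present.
    values = [r[field] for r in rows if r[field] is not None]
    return sum(values) / len(values)

def handle_missing(rows, field, policy):
    # Apply a per-use-case policy to records with an empty field:
    # either fill the gap with the column mean, or drop the record.
    if policy == "impute_mean":
        fill = column_mean(rows, field)
        return [{**r, field: fill if r[field] is None else r[field]} for r in rows]
    if policy == "drop":
        return [r for r in rows if r[field] is not None]
    raise ValueError(f"unknown policy: {policy}")

data = [{"deal_size": 100.0}, {"deal_size": None}, {"deal_size": 300.0}]
print(handle_missing(data, "deal_size", "impute_mean"))
# [{'deal_size': 100.0}, {'deal_size': 200.0}, {'deal_size': 300.0}]
```

The point of making the policy an explicit parameter is the one the post argues: the right treatment of empty values depends on the business use case, so it should be a choice the builder makes, not a hard-coded default.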
12