About
Activity
-
The power of GPT-4o in the palm of our hands. It's a monumental day - for the first time in history, open weights catch up with the latest frontier…
-
In my experience, the accuracy gap between open-source and proprietary is negligible now and open-source is cheaper, faster, more customizable &…
-
Thank you Meta and the Llama team for your huge contributions to open-source! Llama 3.1 with increased context length and improved capabilities is a…
Experience & Education
Patents
-
Systems and methods for ontology matching
Filed US 20240087687
Systems and methods for aligning ontologies, such as medical or related ontologies, are disclosed. Initially, ontology specifications are received, such as ontologies comprising a root node and a plurality of child nodes. Each node is assigned at least one synthetic identifier corresponding to its path(s) to the root node. In some cases, nodes may be clustered using one or more clustering algorithms. A translation model is pre-trained by applying one or more masked language models to the ontologies and the synthetic identifiers. Subsequently, each ontology is augmented by identifying nodes in different ontologies that match and assigning labels and/or other details across different ontologies. The translation model can then be fine-tuned using the augmented data. The fine-tuned translation model is then used to identify corresponding nodes in target ontologies in response to translation requests.
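The abstract's first step, assigning each node a synthetic identifier derived from its path to the root, can be sketched in plain Python. This is an illustrative toy, not the patented method: the node names, the slash-joined identifier format, and the helper names (`root_paths`, `synthetic_ids`) are all hypothetical; the real system goes on to pre-train a masked language model on the ontologies plus these identifiers and fine-tune it on augmented match data.

```python
def root_paths(parent, node):
    """Return the path from `node` up to the root as a list of labels."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return list(reversed(path))

def synthetic_ids(parent, nodes):
    """Assign each node a synthetic identifier: its slash-joined root path."""
    return {n: "/".join(root_paths(parent, n)) for n in nodes}

# Two toy medical ontologies, each encoded as a child -> parent map.
ont_a_parent = {"Cardiology": "Medicine", "Heart Attack": "Cardiology"}
ont_b_parent = {"Heart": "ClinicalFindings", "Myocardial Infarction": "Heart"}

ids_a = synthetic_ids(ont_a_parent, ["Medicine", "Cardiology", "Heart Attack"])
ids_b = synthetic_ids(ont_b_parent, ["ClinicalFindings", "Heart", "Myocardial Infarction"])
print(ids_a["Heart Attack"])  # Medicine/Cardiology/Heart Attack
```

Because the identifier encodes hierarchical context rather than just a label, "Heart Attack" and "Myocardial Infarction" carry comparable path information that a pre-trained model could learn to align.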
Courses
-
Advanced Operating Systems
CSC 552
-
Algorithms for NLP
CSC 585
-
Applications of Machine Learning
-
Artificial Intelligence for Health and Medicine
-
Computational Linguistics
CSC 538
-
Design and Analysis of Algorithms
CSC 545
-
Engineering Statistics
SIE 530
-
Enterprise Data Management
MIS 531
-
Fundamentals of Optimization
SIE 545
-
Neural Networks
INFO 557
-
Principles of Machine Learning
CSC 580
-
Software Engineering
CSC 536
-
Stochastic Process Modeling
-
Text Retrieval And Web Search
CSC 583
Languages
-
Persian
Native or bilingual proficiency
-
English
Full professional proficiency
-
Russian
Elementary proficiency
-
Spanish
Elementary proficiency
-
Arabic
Elementary proficiency
More activity by Sina
-
Among the most impressive aspects of today's Llama 3.1 release is the accompanying research paper! Close to 100 pages of deep knowledge-sharing on…
-
Probably the craziest week in Open Source AI (yet): 1. Mistral (in collaboration with Nvidia) dropped Apache 2.0 licensed NeMo 12B LLM, better than…
-
Apple has entered the game! Apple just released a 7B open-source LLM, weights, training code, and dataset! 👀 TL;DR: 🧠 7B base model, trained on…
-
Let’s goooo! Nvidia & Mistral release Mistral NeMo 12B 🔥 > Apache 2.0 licensed w/ 128K context > Beats Llama 3 8B, Gemma 2 9B > Multilingual - EN…
-
What an eventful day in Open Source LLMs today: Mistral released Codestral Mamba 🐍 > Beats DeepSeek QwenCode, best model < 10B, competitive with…
-
FlashAttention-3 is here, and it's 1.5x-2.0x faster than Flash Attention 2! ⚡ Flash Attention is an optimized algorithm for transformer models that…
-
#Truveta now offers regulatory and audit capabilities to support real-world evidence (#RWE) submissions to the Food and Drug Administration (#FDA)…
-
No, Moshi doesn’t beat GPT4o, but, 1. It’s small, 7B model (14GB VRAM in bf16/ fp16, 7GB in fp8/ int8) - Can be quantised further to run in even…
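The VRAM figures in the Moshi post above follow from simple arithmetic: weight memory is roughly parameter count times bytes per parameter (2 bytes in bf16/fp16, 1 byte in fp8/int8). A minimal sketch, with a hypothetical helper name, that reproduces the post's 14 GB and 7 GB numbers; it counts weights only and ignores activations and KV cache:

```python
def weight_vram_gb(n_params, bytes_per_param):
    """Rough VRAM needed for model weights alone, in gigabytes."""
    return n_params * bytes_per_param / 1e9

# 7B-parameter model at different precisions:
print(weight_vram_gb(7e9, 2))  # bf16/fp16 -> 14.0
print(weight_vram_gb(7e9, 1))  # fp8/int8  -> 7.0
```

Further quantization (e.g. 4-bit, 0.5 bytes per parameter) shrinks the footprint again, which is why the post notes the model can run on even smaller devices.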