NVIDIA NGC™ is the portal of enterprise services, software, management tools, and support for end-to-end AI and digital twin workflows. Bring your solutions to market faster with fully managed services, or take advantage of performance-optimized software to build and deploy solutions on your preferred cloud, on-prem, and edge systems.
NGC offers a collection of cloud services, including NVIDIA NeMo™, BioNeMo™, and Riva Studio for generative AI, drug discovery, and speech AI solutions, and the NGC Private Registry for securely sharing proprietary AI software.
The NGC catalog provides access to GPU-accelerated software that speeds up end-to-end workflows with performance-optimized containers, pretrained AI models, and industry-specific SDKs that can be deployed on premises, in the cloud, or at the edge.
AI framework and SDK containers are continuously optimized and released monthly to deliver faster time to solution.
State-of-the-art pretrained models help you build custom models faster for computer vision, speech AI, and more.
Hundreds of Jupyter Notebooks let you understand, customize, test, and build models faster, while taking advantage of best practices.
Containers undergo monthly security scans, and detailed reports are provided to ensure containers meet your company’s security policy.
Test drive the models directly from your browser, integrate them into your applications using APIs, or download and run them on your Windows machine without any setup.
From HPC to conversational AI to medical imaging to recommender systems and more, NGC Collections offer ready-to-use containers, pretrained models, SDKs, and Helm charts for diverse use cases and industries—in one place—to speed up your application development and deployment process.
Language modeling is a natural language processing (NLP) task that determines the probability of a given sequence of words occurring in a sentence.
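To make the definition concrete, here is a minimal sketch of the idea: a toy bigram model estimates the probability of a sentence as a product of conditional word probabilities. The corpus, the `sentence_probability` helper, and all counts below are hypothetical, purely for illustration; real language models are trained on vastly larger data.

```python
from collections import defaultdict

# Toy corpus; a real language model would be trained on far more text.
corpus = [
    ["the", "cat", "sat"],
    ["the", "dog", "sat"],
    ["the", "cat", "ran"],
]

# Count bigrams and their left-hand contexts, with "<s>" as sentence start.
bigram = defaultdict(int)
context = defaultdict(int)
for sentence in corpus:
    for prev, word in zip(["<s>"] + sentence, sentence):
        bigram[(prev, word)] += 1
        context[prev] += 1

def sentence_probability(words):
    """P(w1..wn) ~= product over i of P(wi | w(i-1)), the bigram assumption."""
    prob = 1.0
    for prev, word in zip(["<s>"] + words, words):
        if context[prev] == 0:
            return 0.0  # unseen context: probability collapses to zero
        prob *= bigram[(prev, word)] / context[prev]
    return prob

# P(the|<s>) * P(cat|the) * P(sat|cat) = 3/3 * 2/3 * 1/2 = 1/3
print(sentence_probability(["the", "cat", "sat"]))
```

Modern neural language models replace these counts with learned parameters, but the underlying objective, assigning a probability to a word sequence, is the same.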
Recommender systems are a type of information filtering system that seeks to predict the "rating" or "preference" a user would give to an item.
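As a small illustration of that prediction task, a minimal user-based collaborative filter can estimate a missing rating as a similarity-weighted average of other users' ratings. The `ratings` data and the `predict` helper below are hypothetical; production recommenders use far richer models.

```python
import math

# Toy user-item rating matrix (absent key = unrated); hypothetical data.
ratings = {
    "alice": {"item1": 5, "item2": 3, "item3": 4},
    "bob":   {"item1": 4, "item2": 3, "item3": 5},
    "carol": {"item1": 1, "item2": 5},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(user, item):
    """Predict a rating as a similarity-weighted average over other users."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        sim = cosine(ratings[user], ratings[other])
        num += sim * r[item]
        den += sim
    return num / den if den else None

print(round(predict("carol", "item3"), 2))
```

Because alice and bob both rated item3 highly and are moderately similar to carol, the predicted rating lands between their two scores.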
Image segmentation is the field of image processing that deals with separating an image into multiple subgroups or regions that represent distinctive objects or subparts.
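A minimal sketch of the idea: threshold a tiny grayscale grid into foreground and background, then flood-fill to label each connected foreground region. The `image` data and `segment` helper are hypothetical; production segmentation uses trained models such as those in the NGC catalog.

```python
# A tiny grayscale "image" as nested lists; values above the threshold
# are foreground. Two separate bright blobs should yield two regions.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [7, 0, 0, 0],
    [7, 7, 0, 0],
]

def segment(img, threshold=5):
    """Label each 4-connected foreground region with a distinct integer."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] > threshold and labels[y][x] == 0:
                current += 1
                stack = [(y, x)]  # iterative flood fill of one region
                while stack:
                    cy, cx = stack.pop()
                    if not (0 <= cy < h and 0 <= cx < w):
                        continue
                    if img[cy][cx] <= threshold or labels[cy][cx]:
                        continue
                    labels[cy][cx] = current
                    stack += [(cy + 1, cx), (cy - 1, cx),
                              (cy, cx + 1), (cy, cx - 1)]
    return current, labels

regions, label_map = segment(image)
print(regions)  # the two bright blobs become two labeled regions
```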
Machine translation is the task of translating text from one language to another.
Object detection involves not only detecting the presence and location of objects in images and videos, but also categorizing them into everyday object classes.
Use cases for automatic speech recognition (ASR) include giving voice commands to an interactive virtual assistant, converting audio to subtitles on an online video, and more.
Speech synthesis or text-to-speech is the task of artificially producing human speech from raw transcripts. Text-to-speech models are used when a mobile device converts text on a webpage to speech.
High-performance computing (HPC) is one of the most essential tools fueling the advancement of computational science, and the universe of scientific computing has expanded in all directions.
NVIDIA AI Enterprise is an end-to-end, secure, cloud-native suite of AI software that enables organizations to solve new challenges while increasing operational efficiency.
Take advantage of software containers that are scanned monthly to reduce security concerns and AI models that provide details around bias, explainability, safety, security, and privacy.
Accelerate time to production with assembled solutions that include AI frameworks, pretrained models, Helm charts, Jupyter Notebooks, and more.
Build AI solutions and manage the lifecycle of AI applications with global enterprise support that ensures your business-critical projects stay on track.
Software from the NGC catalog runs on bare-metal servers, Kubernetes, or on virtualized environments and can be deployed on premises, in the cloud, or at the edge—maximizing utilization of GPUs, portability, and scalability of applications. Users can manage the end-to-end AI development lifecycle with NVIDIA Base Command™.
NGC catalog software runs on a wide variety of NVIDIA GPU-accelerated platforms, including NVIDIA-Certified Systems™, NVIDIA DGX™ systems, NVIDIA TITAN- and NVIDIA RTX™-powered workstations, and virtualized environments with NVIDIA AI Enterprise.
Software from the NGC catalog can be deployed on GPU-powered instances. The software can be deployed directly on virtual machines (VMs) or on Kubernetes services offered by major cloud service providers (CSPs). NVIDIA AI software makes it easy for enterprises to develop and deploy their solutions in the cloud.
As computing expands beyond data centers and to the edge, the software from the NGC catalog can be deployed on Kubernetes-based edge systems for low-latency, high-throughput inference. Securely deploy, manage, and scale AI applications from NGC across distributed edge infrastructure with NVIDIA Fleet Command™.
The NGC software catalog provides a range of resources that meet the needs of data scientists, developers, and researchers with varying levels of expertise, including containers, pretrained models, domain-specific SDKs, use case-based collections, and Helm charts for the fastest AI implementations.
Collections make it easy to discover compatible framework containers, models, Jupyter notebooks, and other resources to get started in AI faster. Each collection also provides detailed documentation for deploying the content for specific use cases.
The NGC catalog offers ready-to-use collections for various applications, including NLP, ASR, intelligent video analytics, and object detection.
The NGC catalog hosts containers for the top AI and data science software, tuned, tested, and optimized by NVIDIA. Fully tested containers for HPC applications and data analytics are also available, allowing users to build solutions from a tested framework with complete control.
The NGC catalog hosts pretrained GPU-optimized models for a variety of common AI tasks that developers can use as is or easily retrain, saving valuable time in bringing solutions to market. Each model comes with a model resume outlining the architecture, training details, datasets used, and limitations. NVIDIA AI Foundation models let developers experience the models directly in their browsers, integrate them into applications using APIs, or download and run them on Windows machines with RTX GPUs.
The NGC catalog hosts tutorial Jupyter notebooks for a variety of use cases—including computer vision, natural language processing, and recommendation—to give developers a head start in building AI models. It also provides the flexibility to modify the notebooks and build custom solutions.
Containers, models, and SDKs from the NGC catalog can be deployed on a managed Jupyter Notebook service with a single click.
Helm charts automate software deployment on Kubernetes clusters. The NGC catalog hosts Kubernetes-ready Helm charts that make it easy to consistently and securely deploy both NVIDIA and third-party software.
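For illustration, consuming a chart from the NGC Helm repository typically looks like the following. This assumes `helm` is installed and a Kubernetes cluster is configured; the release name is illustrative, and chart names should be confirmed on the NGC catalog.

```shell
# Add the NGC Helm repository and refresh the local chart index.
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# Browse the charts available from the repository.
helm search repo nvidia

# Install a chart into the cluster ("my-release" is an illustrative name).
helm install my-release nvidia/gpu-operator
```

Charts hosted in an NGC Private Registry follow the same pattern with an org-specific repository URL and an API key for authentication.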
NVIDIA GPU Operator is a suite of NVIDIA drivers, container runtime, device plug-in, and management software that IT teams can install on Kubernetes clusters to give users faster access to run their workloads.
The NGC catalog features NVIDIA TAO Toolkit, NVIDIA Triton™ Inference Server, and NVIDIA TensorRT™ to enable deep learning application developers and data scientists to re-train deep learning models and easily optimize and deploy them for inference.
Learn how DeepZen, an AI company focused on human-like speech with emotions, leverages the NGC catalog to automate processes such as audio recordings and voiceovers.
Learn how AI startup Neurala speeds up deep learning training and inference for their Brain Builder platform by 8X.
Learn how Clemson University’s HPC administrators support GPU-optimized containers to help scientists accelerate research.
Learn how the University of Arizona employs containers from the NGC catalog to accelerate their scientific research by creating 3D point clouds directly on drones.
Accelerate Your Workflow with the NGC Catalog
NVIDIA partners offer a range of data science, AI training and inference, high-performance computing (HPC), and visualization solutions.
Learn how to publish your GPU-optimized software on the NGC catalog.
Read about the latest NGC catalog updates and announcements.
Walk through how to use the NGC catalog with these video tutorials.
Watch all the top NGC sessions on demand.
Learn how to use the NGC catalog with these step-by-step instructions.
The NGC catalog provides a comprehensive collection of GPU-optimized containers for AI, machine learning, and HPC that are tested and ready to run on supported NVIDIA GPUs on premises, in the cloud, or at the edge. In addition, the catalog provides pretrained models, model scripts, and industry solutions that can be easily integrated into existing workflows.
Compiling and deploying deep learning frameworks can be time consuming and error prone. Optimizing AI software requires expertise. Building models requires expertise, time, and compute resources. The NGC catalog takes care of these challenges with GPU-optimized software and tools that data scientists, developers, and IT teams can leverage so they can focus on building their solutions.
Each container has a pre-integrated set of GPU-accelerated software. The stack includes the chosen application or framework, NVIDIA CUDA® Toolkit, accelerated libraries, and other necessary drivers—all tested and tuned to work together immediately with no additional setup.
The NGC catalog features the top AI software, including TensorFlow, PyTorch, MXNet, NVIDIA TensorRT, RAPIDS™, and many more. Browse the NGC catalog to see the full list.
The NGC catalog containers run on PCs, workstations, HPC clusters, NVIDIA DGX systems, NVIDIA-Certified Systems, and NVIDIA GPUs on supported cloud providers. The containers run in Docker and Singularity runtimes. View the NGC documentation for more information.
NVIDIA offers virtual machine images in the marketplace section of each supported cloud service provider. To run an NGC container, simply pick the appropriate GPU instance type, launch it with the NVIDIA VM image, and pull the container from the NGC catalog. The exact steps vary by cloud provider, but you can find step-by-step instructions in the NGC documentation.
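Once the GPU instance is running, pulling and starting a container from the NGC registry might look like this. The image tag below is illustrative; check the container's NGC catalog page for current tags, and note that the `docker run --gpus` flag requires the NVIDIA Container Toolkit on the host.

```shell
# Pull a framework container from the NGC registry (tag is illustrative).
docker pull nvcr.io/nvidia/pytorch:24.01-py3

# Start an interactive session with all GPUs exposed to the container.
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:24.01-py3
```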
The most popular deep learning software such as TensorFlow, PyTorch, and MXNet are updated monthly by NVIDIA engineers to optimize the complete software stack and get the most from your NVIDIA GPUs.
There’s no charge to download the containers from the NGC catalog (subject to the terms of use). However, for running in the cloud, each cloud service provider will have their own pricing for GPU compute instances.
No, it’s a portal that delivers GPU-optimized software, enterprise services, and management tools.
The NGC Private Registry was developed to provide users with a secure space to store and share custom containers, models, model scripts, and Helm charts within their enterprise. The Private Registry allows them to protect their IP while increasing collaboration.
Users get access to the NVIDIA Developer Forum, supported by a large community of AI and GPU experts from the NVIDIA customer, partner, and employee ecosystem. NVIDIA Enterprise Support is available with NVIDIA AI Enterprise licenses and provides direct access to NVIDIA experts, control of your upgrade and maintenance schedules with long-term support options, and access to training and knowledge-base resources.
In addition, NGC Support Services provides L1-L3 support on NVIDIA-Certified Systems, available through our OEM partners.
NVIDIA-Certified Systems, consisting of NVIDIA EGX™ and HGX™ platforms, enable enterprises to confidently choose performance-optimized hardware and software solutions that securely and optimally run their AI workloads—both in smaller configurations and at scale. See the full list of NVIDIA-Certified Systems.