Container images

📄️ TensorFlow

TWSC provides a total of 20 ready-to-use working environments of NGC-optimized TensorFlow. TensorFlow is an open-source library that uses dataflow graphs to represent abstract numerical computations: the nodes in the graph represent mathematical operations, while the edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you run the same computation on a variety of devices (PCs with one or more CPUs or GPUs, smart mobile devices, and so on) without rewriting code.
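
The dataflow-graph idea can be sketched in a few lines of plain Python (TensorFlow itself is not used here; the `Node` class and `constant` helper below are illustrative, not TensorFlow API):

```python
# Minimal sketch of a dataflow graph: nodes are operations,
# edges carry the values (tensors) flowing between them.
class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function computing this node's value
        self.inputs = inputs  # upstream nodes (the incoming edges)

    def evaluate(self):
        # Pull values along the incoming edges, then apply this
        # node's operation.
        return self.op(*(n.evaluate() for n in self.inputs))

def constant(value):
    # A leaf node that simply emits a fixed value.
    return Node(lambda: value)

# Build the graph once: (2 + 3) * 4
a, b, c = constant(2), constant(3), constant(4)
total = Node(lambda x, y: x + y, a, b)
result = Node(lambda x, y: x * y, total, c)

print(result.evaluate())  # 20
```

Because the graph is a data structure rather than immediately-executed code, a runtime can inspect it and decide where each node runs, which is what makes device portability possible.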

📄️ PyTorch

TWSC provides 10 ready-to-use working environments of NGC-optimized PyTorch. PyTorch is a GPU-accelerated tensor computation framework with a Python front end. Functionality can be easily extended with common Python libraries such as NumPy, SciPy, and Cython. Automatic differentiation is done with a tape-based system at both the functional and neural network layer level. This brings a high level of flexibility and speed to deep learning work and offers NumPy-like acceleration. PyTorch also includes standard neural network layers, deep learning optimizers, data loading utilities, and multi-GPU and multi-node support. Functions are executed immediately instead of being enqueued in a static graph, improving ease of use and providing a great debugging experience.
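
Tape-based automatic differentiation, the technique behind PyTorch's autograd, can be illustrated in plain Python. This is a sketch of the idea only, not PyTorch's actual implementation; the `Var` class and `backward` function are made up for illustration:

```python
tape = []  # records (output, local partial derivatives) per operation

class Var:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

    def __mul__(self, other):
        out = Var(self.value * other.value)
        # Local derivatives: d(out)/d(self)=other, d(out)/d(other)=self
        tape.append((out, [(self, other.value), (other, self.value)]))
        return out

    def __add__(self, other):
        out = Var(self.value + other.value)
        tape.append((out, [(self, 1.0), (other, 1.0)]))
        return out

def backward(output):
    # Replay the tape in reverse, accumulating gradients (chain rule).
    output.grad = 1.0
    for out, partials in reversed(tape):
        for var, local_grad in partials:
            var.grad += out.grad * local_grad

x, y = Var(3.0), Var(4.0)
z = x * y + x     # operations run immediately AND are recorded
backward(z)
print(x.grad, y.grad)  # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

Note how the forward pass executes eagerly (as the blurb above describes) while the tape retains just enough information to compute gradients afterwards.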

📄️ CUDA

TWSC provides a ready-to-use working environment of NGC's CUDA. CUDA® is a parallel computing platform and application programming interface (API) model created by NVIDIA. It allows software developers and engineers to use CUDA-enabled graphics processing units (GPUs) for general-purpose processing. The platform is designed to work with programming languages such as C, C++, and Fortran. This accessibility makes it easier for specialists in parallel programming to use GPU resources.

📄️ CNTK

TWSC provides a ready-to-use working environment of NGC-optimized Cognitive Toolkit™. The NVIDIA Deep Learning SDK accelerates widely used deep learning frameworks such as the Microsoft® Cognitive Toolkit™, formerly referred to as CNTK. The Microsoft Cognitive Toolkit empowers you to harness the intelligence within massive datasets through deep learning, providing uncompromised, commercial-grade quality (scaling, speed, and accuracy) and compatibility with the programming languages and algorithms you already work with.

📄️ MXNet

TWSC provides a ready-to-use working environment of NGC-optimized MXNet. MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming styles to maximize efficiency and productivity. At its core is a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory-efficient. The library is portable, lightweight, and scalable to multiple GPUs and multiple machines.
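
The symbolic-versus-imperative distinction can be sketched in plain Python (MXNet itself is not used; the `Sym` class and `var` helper are illustrative, not MXNet API):

```python
# Imperative style: each operation executes immediately.
a = 2 + 3   # evaluated right away -> 5

# Symbolic style: operations build a deferred expression that a
# scheduler or optimizer can inspect before anything actually runs.
class Sym:
    def __init__(self, fn, name):
        self.fn = fn      # deferred evaluation function
        self.name = name  # printable structure of the expression

    def __add__(self, other):
        return Sym(lambda env: self.fn(env) + other.fn(env),
                   f"({self.name} + {other.name})")

    def __mul__(self, other):
        return Sym(lambda env: self.fn(env) * other.fn(env),
                   f"({self.name} * {other.name})")

def var(name):
    # A symbolic placeholder, bound to a value only at evaluation time.
    return Sym(lambda env: env[name], name)

x, y = var("x"), var("y")
expr = (x + y) * x                 # nothing is computed yet
print(expr.name)                   # ((x + y) * x)
print(expr.fn({"x": 2, "y": 3}))   # 10, computed only on demand
```

Having the whole expression available before execution is what lets a graph-optimization layer reorder, fuse, or parallelize operations, which is much harder when each operation runs the moment it is written.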

📄️ Triton Inference Server

TWSC provides a pay-as-you-go working environment of NGC's Triton Inference Server (formerly known as the TensorRT Inference Server). The server provides an inference service via an HTTP endpoint, allowing remote clients to request inference for any model the server manages. The server itself is included in the inference server container. External to the container, there are additional C++ and Python client libraries, and additional documentation at GitHub: Inference Server.
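
As a sketch of what a remote client sends, the snippet below composes (but does not send) an HTTP inference request. The model name `my_model`, input name `input__0`, port, and JSON layout follow the newer Triton HTTP/v2 conventions and are assumptions for illustration; check the documentation for your server version's exact schema:

```python
import json

def build_infer_request(model_name, input_name, data):
    # One request body asking the server to run `model_name`
    # on a single flat FP32 input tensor.
    body = {
        "inputs": [
            {
                "name": input_name,
                "shape": [1, len(data)],
                "datatype": "FP32",
                "data": data,
            }
        ]
    }
    # A client would POST this body to the server's HTTP endpoint,
    # e.g. http://<host>:8000/v2/models/<model_name>/infer
    url = f"http://localhost:8000/v2/models/{model_name}/infer"
    return url, json.dumps(body)

url, payload = build_infer_request("my_model", "input__0", [0.1, 0.2, 0.3])
print(url)
```

The response would carry an `outputs` list with the same tensor structure, which is what makes the service usable from any HTTP-capable client, not just the bundled C++ and Python libraries.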

📄️ DIGITS

TWSC provides a pay-as-you-go working environment of NGC's DIGITS. The NVIDIA Deep Learning GPU Training System (DIGITS) puts the power of deep learning into the hands of engineers and data scientists. DIGITS can be used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation, and object detection tasks. DIGITS simplifies common deep learning tasks such as managing data, designing and training neural networks on multi-GPU systems, monitoring performance in real time with advanced visualizations, and selecting the best-performing model from the results browser for deployment. DIGITS is completely interactive, so data scientists can focus on designing and training networks rather than programming and debugging.

📄️ Merlin Training

TWSC provides a pay-as-you-go working environment of the NGC Merlin Training container. Merlin is a framework for accelerating the entire recommender-system pipeline on the GPU, from data ingestion and training to deployment. It empowers data scientists, machine learning engineers, and researchers to build high-performing recommenders at scale, and includes tools that democratize building deep learning recommenders by addressing common ETL, training, and inference challenges.

📄️ Merlin Inference

TWSC provides a pay-as-you-go working environment of the NGC Merlin Inference container, the deployment-stage counterpart to the Merlin Training container. Merlin is a framework for accelerating the entire recommender-system pipeline on the GPU, from data ingestion and training to deployment. It empowers data scientists, machine learning engineers, and researchers to build high-performing recommenders at scale, and includes tools that democratize building deep learning recommenders by addressing common ETL, training, and inference challenges.