📄️ Image overview
For information on container images, please refer to Container image overview.
📄️ TensorFlow
TWSC provides a total of 20 ready-to-use working environments of NGC-optimized TensorFlow. TensorFlow is an open-source library that uses data flow graphs to represent abstract numerical computing processes. The nodes in the graph represent mathematical operations, while the edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture makes it easy to run computations on different devices, such as PCs with one or more CPUs/GPUs or smart mobile devices, without rewriting code.
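The graph idea above can be illustrated with a toy sketch in plain Python (not the TensorFlow API; the node names and operations are invented for illustration): nodes are operations, and edges carry tensors (here, plain lists of numbers) between them.

```python
# Toy dataflow graph: each entry is a node; "add" and "scale" nodes
# name the edges (other nodes) whose values flow into them.
graph = {
    "a": ("const", [1.0, 2.0, 3.0]),
    "b": ("const", [4.0, 5.0, 6.0]),
    "sum": ("add", "a", "b"),        # elementwise add of two edges
    "out": ("scale", "sum", 2.0),    # multiply every element by 2
}

def evaluate(graph, node):
    """Evaluate a node by first recursively evaluating its inputs."""
    kind, *args = graph[node]
    if kind == "const":
        return args[0]
    if kind == "add":
        x, y = (evaluate(graph, a) for a in args)
        return [u + v for u, v in zip(x, y)]
    if kind == "scale":
        x = evaluate(graph, args[0])
        return [u * args[1] for u in x]
    raise ValueError(kind)

print(evaluate(graph, "out"))  # [10.0, 14.0, 18.0]
```

Because the whole computation is described as data before it runs, a runtime is free to place different nodes on different devices, which is the property the description highlights.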
📄️ PyTorch
TWSC provides 10 ready-to-use working environments of NGC-optimized PyTorch. PyTorch is a GPU-accelerated tensor computation framework with a Python front end. Functionality can be easily extended with common Python libraries such as NumPy, SciPy, and Cython. Automatic differentiation is done with a tape-based system at both the functional and neural-network layer levels. This brings a high level of flexibility and speed to deep learning frameworks and offers NumPy-like acceleration. PyTorch also includes standard neural network layers, deep learning optimizers, data loading utilities, and multi-GPU and multi-node support. Functions are executed immediately instead of being enqueued in a static graph, improving ease of use and providing a great debugging experience.
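The tape-based automatic differentiation mentioned above can be sketched in a few lines of plain Python (a toy stand-in, not PyTorch's autograd API): each operation executes eagerly, records a backward function on a tape, and the backward pass replays the tape in reverse to accumulate gradients.

```python
tape = []  # records a backward function for each executed op

class Var:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

    def __mul__(self, other):
        out = Var(self.value * other.value)
        def backward():
            # chain rule for multiplication: d(xy)/dx = y, d(xy)/dy = x
            self.grad += other.value * out.grad
            other.grad += self.value * out.grad
        tape.append(backward)
        return out

    def __add__(self, other):
        out = Var(self.value + other.value)
        def backward():
            self.grad += out.grad
            other.grad += out.grad
        tape.append(backward)
        return out

x, y = Var(3.0), Var(4.0)
z = x * y + x                    # executed immediately, eager style
z.grad = 1.0
for backward in reversed(tape):  # replay the tape backwards
    backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

The eager execution in the forward pass is exactly the "functions are executed immediately instead of being enqueued in a static graph" behavior the description refers to.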
📄️ CUDA
TWSC provides a ready-to-use working environment of NGC’s CUDA. CUDA® is a parallel computing platform and application programming interface (API) model created by NVIDIA. It allows software developers and engineers to use CUDA-enabled graphics processing units (GPUs) for general-purpose processing. The platform is designed to work with programming languages such as C, C++, and Fortran, which makes it easier for specialists in parallel programming to use GPU resources.
📄️ MATLAB (BYOL)
TWSC provides a pay-as-you-go working environment of NGC-optimized MATLAB. MATLAB® is mathematical computing software that supports matrix manipulations and plotting of functions and data. Moreover, you can interface MATLAB with other languages. To activate the service on the cloud, please enter the MathWorks account and password associated with a valid license.
📄️ Caffe
TWSC provides a ready-to-use working environment of NGC’s NVCaffe. The NVIDIA Deep Learning SDK accelerates widely used deep learning frameworks such as NVCaffe™. Caffe™ was originally developed by the Berkeley Vision and Learning Center (BVLC) and other community contributors. It is a deep learning framework made with expression, speed, and modularity in mind.
📄️ CNTK
TWSC provides a ready-to-use working environment of NGC-optimized Cognitive Toolkit™. The NVIDIA Deep Learning SDK accelerates widely used deep learning frameworks such as the Microsoft® Cognitive Toolkit™, formerly referred to as CNTK. The Microsoft Cognitive Toolkit empowers you to harness the intelligence within massive datasets through deep learning, providing uncompromised, commercial-grade quality (scaling, speed, and accuracy) and compatibility with the programming languages and algorithms you already work with.
📄️ MXNet
TWSC provides a ready-to-use working environment of NGC-optimized MXNet. MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core is a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. The library is portable, lightweight, and scalable to multiple GPUs and multiple machines.
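The contrast between the two styles MXNet lets you mix can be sketched in plain Python (not MXNet's API; the `symbol_scale` helper is invented for illustration). Imperative code produces results immediately; symbolic code first builds a deferred expression that is evaluated later, leaving room for a scheduler or graph optimizer to rearrange the computation.

```python
# Imperative style: each line executes right away.
a = [1, 2, 3]
b = [x * 2 for x in a]          # result exists immediately

# Symbolic style: build a deferred expression, then bind inputs.
def symbol_scale(factor):
    """Return an unevaluated 'symbol' that scales its inputs."""
    return lambda inputs: [x * factor for x in inputs]

expr = symbol_scale(2)          # no computation has happened yet
result = expr([1, 2, 3])        # evaluated on demand
print(b == result)              # True: same answer, different styles
```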
📄️ Caffe2
TWSC provides a ready-to-use working environment of NGC-optimized Caffe2. The NVIDIA Deep Learning SDK accelerates widely used deep learning frameworks such as Caffe2™. Caffe2 is a deep learning framework designed to easily express all model types, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), through its Python API, and to execute them using a highly efficient C++ and CUDA® backend.
📄️ TensorRT
TWSC provides a ready-to-use working environment of NGC’s TensorRT. NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA GPUs. It is designed to work in connection with deep learning frameworks that are commonly used for training. TensorRT focuses specifically on running an already-trained network quickly and efficiently on a GPU to generate a result, a process known as inferencing.
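The distinction between training and inference described above can be made concrete with a toy forward pass in plain Python (not TensorRT; the weights below are made up): at inference time the weights are frozen, and running the network is just evaluating it on new input.

```python
# Frozen parameters of an "already-trained" 2-unit dense layer.
weights = [[0.2, 0.8], [0.5, -0.5]]
bias = [0.1, 0.0]

def infer(x):
    """Forward pass only: dense layer followed by ReLU."""
    pre = [sum(w * xi for w, xi in zip(row, x)) + b
           for row, b in zip(weights, bias)]
    return [max(0.0, v) for v in pre]

print(infer([1.0, 2.0]))
```

An inference engine like TensorRT specializes in exactly this forward-only path, applying optimizations that are impossible when gradients still have to flow backwards.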
📄️ Triton Inference Server
TWSC provides a pay-as-you-go working environment of NGC’s Triton Inference Server (formerly the TensorRT Inference Server). The server provides an inference service via an HTTP endpoint, allowing remote clients to request inferencing for any model that is being managed by the server. The server itself is included in the container. External to the container, there are additional C++ and Python client libraries, and additional documentation at GitHub: Inference Server.
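The serving pattern described above, a model behind an HTTP endpoint that remote clients call, can be sketched with the Python standard library (a pure-Python stand-in, not the server's real protocol; the `/predict` route and the doubling "model" are invented for illustration).

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def model(inputs):
    """Stand-in 'model': doubles each input value."""
    return [2 * x for x in inputs]

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        inputs = json.loads(self.rfile.read(length))
        body = json.dumps({"outputs": model(inputs)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "remote client" requesting inference over HTTP.
url = f"http://127.0.0.1:{server.server_port}/predict"
req = Request(url, data=json.dumps([1, 2, 3]).encode(),
              headers={"Content-Type": "application/json"})
with urlopen(req) as resp:
    response = json.loads(resp.read())
print(response)  # {'outputs': [2, 4, 6]}
server.shutdown()
```

The real server adds model management, batching, and GPU scheduling on top of this request/response shape.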
📄️ Theano
TWSC provides a pay-as-you-go working environment of NGC-optimized Theano. Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently.
📄️ Torch
TWSC provides a pay-as-you-go working environment of NGC-optimized Torch. Torch is a scientific computing framework with wide support for deep learning algorithms. Thanks to an easy and fast scripting language, Lua, and an underlying C/CUDA® implementation, Torch is easy to use and efficient. Torch offers popular neural network and optimization libraries that are easy to use yet provide maximum flexibility for building complex neural network topologies.
📄️ DIGITS
TWSC provides a pay-as-you-go working environment of NGC’s DIGITS. The NVIDIA Deep Learning GPU Training System (DIGITS) puts the power of deep learning into the hands of engineers and data scientists. DIGITS can be used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation, and object detection tasks. DIGITS simplifies common deep learning tasks such as managing data, designing and training neural networks on multi-GPU systems, monitoring performance in real time with advanced visualizations, and selecting the best-performing model from the results browser for deployment. DIGITS is completely interactive, so data scientists can focus on designing and training networks rather than programming and debugging.
📄️ NeMo
TWSC provides a pay-as-you-go working environment of NGC NeMo. NeMo makes it easy to build new state-of-the-art speech and NLP networks from API-compatible building blocks that can be connected together, and it uses PyTorch Lightning for easy and performant multi-GPU/multi-node mixed-precision training.
📄️ RAPIDS
TWSC provides a pay-as-you-go working environment of NGC's RAPIDS. RAPIDS is a GPU-acceleration platform built on CUDA, designed for data science and machine learning, that lets scientists rapidly gain knowledge from ever-growing datasets. The framework lets you execute an end-to-end data pipeline, including data preparation, model training, and visualization.
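The end-to-end pipeline shape described above can be sketched in plain Python on the CPU (a stand-in for what RAPIDS runs GPU-accelerated; the data and the least-squares "model" are invented for illustration).

```python
# 1. Data preparation: drop incomplete records.
raw = [(1.0, 2.1), (2.0, 3.9), (None, 1.0), (3.0, 6.2), (4.0, 7.8)]
data = [(x, y) for x, y in raw if x is not None]

# 2. Model training: least-squares fit of y = a*x + b.
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# 3. "Visualization": report the fitted line.
print(f"y = {a:.2f}x + {b:.2f}")
```

In RAPIDS each stage has a GPU-accelerated, pandas/scikit-learn-like counterpart, so the same pipeline scales to datasets far beyond what this toy handles.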
📄️ Clara Train SDK
TWSC provides a pay-as-you-go working environment of NVIDIA's Clara Train SDK. Clara Train SDK is a domain-optimized developer application framework that includes APIs for AI-Assisted Annotation, which makes any medical viewer AI-capable, and a TensorFlow-based training framework with pre-trained models for starting AI development with techniques such as transfer learning, federated learning, and AutoML.
📄️ CUDA GL
TWSC provides a pay-as-you-go working environment of the NGC CUDA GL container, which extends the CUDA images by adding support for OpenGL through libglvnd. These images are provided as a base layer upon which to build your own GPU-accelerated application container image.
📄️ Morpheus
TWSC provides a pay-as-you-go working environment of the NGC Morpheus container, which allows teams to build their own optimized pipelines that address cybersecurity and information security use cases. Morpheus provides development capabilities around dynamic protection, real-time telemetry, adaptive policies, and cyber defenses for detecting and remediating cybersecurity threats.
📄️ Merlin Training
TWSC provides a pay-as-you-go working environment of the NGC Merlin Training container. Merlin is a framework for accelerating the entire recommender system pipeline on the GPU, from data ingestion and training to deployment. Merlin empowers data scientists, machine learning engineers, and researchers to build high-performing recommenders at scale, and includes tools that democratize building deep learning recommenders by addressing common ETL, training, and inference challenges.
📄️ Merlin Inference
TWSC provides a pay-as-you-go working environment of the NGC Merlin Inference container. Merlin is a framework for accelerating the entire recommender system pipeline on the GPU, from data ingestion and training to deployment. Merlin empowers data scientists, machine learning engineers, and researchers to build high-performing recommenders at scale, and includes tools that democratize building deep learning recommenders by addressing common ETL, training, and inference challenges.
📄️ Maxine Audio Effect SDK
TWSC provides a pay-as-you-go working environment of the NGC Maxine container. It comes with two features, Noise Reduction (NR) and Room Echo Removal (RER), which use state-of-the-art AI models. NR removes several common background noises while preserving the speaker's natural voice. RER removes reverberation from audio and restores the clarity of a speaker's voice. There is also a model that combines both features into one.
📄️ HPC SDK
TWSC provides a pay-as-you-go working environment of the NGC HPC SDK. Its compilers support GPU acceleration of applications using standard C++ and Fortran, OpenACC® directives, and CUDA®. GPU-accelerated math libraries maximize performance on common HPC algorithms, and optimized communications libraries enable standards-based multi-GPU and scalable-systems programming. Performance profiling and debugging tools simplify porting and optimization of HPC applications.
📄️ TAO Toolkit for Computer Vision
TWSC provides a pay-as-you-go working environment of the NGC TAO Toolkit for Computer Vision. It takes purpose-built, pre-trained AI models and customizes them with your own data. TAO adapts popular network architectures and backbones to your data, allowing you to train, fine-tune, prune, and export highly optimized and accurate AI models for edge deployment.
📄️ Modulus
TWSC provides a pay-as-you-go working environment of NGC Modulus. Modulus is a deep learning framework that blends the power of physics and partial differential equations (PDEs) with AI to build more robust models for better analysis.
📄️ Clara Parabricks
TWSC provides a pay-as-you-go working environment of NGC Clara Parabricks, which supports applications across the genomics industry, primarily analytical workflows for DNA, RNA, and somatic mutation detection. The results are easy to verify and to combine with other publicly available datasets.
📄️ Meta-CodeLlama2-7B, 13B, 34B-8K
The CodeLlama model is a language model for code development based on Meta's commercially available LLM, Llama 2. CodeLlama supports the following programming languages: Python, C++, Java, PHP, TypeScript, C#, and Bash. By using it, you agree to comply with the license released by Meta, including the End User License Agreement and Acceptable Use Policy.