
OneAI FAQs

Subscription fee

Q1. What are the fees associated with using OneAI and how are they charged?

Use of the OneAI service incurs a subscription fee, as well as costs for tagging tools, notebook hosting, model training, inference, data storage, and data processing resources. Please refer to OneAI Subscription Fee for more information.

Q2. Why can't I subscribe to OneAI?
  1. Project restrictions: if the project expires during the current month, or if the project's wallet balance is less than 100, OneAI cannot be subscribed to.
  2. Identity restrictions: subscriptions can only be made by the tenant admin and are not available to tenant users.

Please refer to OneAI Subscription policies for more information.

Q3. How do I view the itemized costs of OneAI services?

You can view OneAI's itemized fees in the Member Center. Select the project you want to view in the Member Center, select Usage from the top menu, and then select OneAI from the drop-down list to view the fees by product item.

Service features

Q1. What is Notebook Service?

OneAI Notebook Service is a flexible, managed JupyterLab interactive collaborative development environment. It integrates leading deep learning frameworks (TensorFlow, PyTorch, MXNet) and suites, and provides pre-built images for data science languages (Julia, R) and data analysis engines (Spark). Please refer to OneAI Notebook Service for more information.

Q2. What pre-built solutions does OneAI AI Maker provide?

OneAI AI Maker provides 8 public templates covering object detection, image classification, medical imaging, classification problems, regression problems, pedestrian attribute recognition, and more: YOLOv3, YOLOv4, NVIDIA Clara Train 3.0, NVIDIA Clara Train 4.0, Scikit-learn regression, Scikit-learn classification, Image-classification, and PAR. Please refer to Case Study for more information.

Q3. What is the difference between OneAI AI Maker and AI Maker (Beta)?

AI Maker (Beta) adds MLflow integration to manage the details of model training.

  1. AI Maker (Beta) > MLflow Management can manage the model lifecycle.
  2. In AI Maker (Beta) > Training Job, training jobs that use the built-in templates apply MLflow automatically to provide a more detailed record of the AI/ML research process; custom training code requires manual MLflow configuration in the code (see the sketch below). The MLflow Logging Function provides centralized management of models through the OneAI user interface.

Please refer to OneAI AI Maker (Beta) for more information.
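As an illustration of the manual configuration mentioned above, here is a minimal MLflow logging sketch for custom training code; the tracking URI, experiment name, parameter names, and artifact path are hypothetical placeholders, not values documented by OneAI.

```python
# Minimal sketch: manual MLflow logging in custom training code.
# All names and URIs below are illustrative placeholders.
import mlflow

mlflow.set_tracking_uri("http://mlflow.oneai.example.com")  # placeholder URI
mlflow.set_experiment("my-experiment")                      # placeholder name

with mlflow.start_run():
    # Log the hyperparameters used for this run.
    mlflow.log_param("learning_rate", 0.001)
    mlflow.log_param("batch_size", 32)

    for epoch in range(3):
        # ... run one training epoch, then log its metrics ...
        mlflow.log_metric("loss", 1.0 / (epoch + 1), step=epoch)

    # Log the trained model archive so it can be managed centrally.
    mlflow.log_artifact("my_model.zip")
```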

Q4. Is it possible for non-project members to use the annotation tool?

The CVAT annotation tool can be used to assign tagging jobs to non-project members. You must grant access to these non-project members yourself:

  1. The entry point for the CVAT annotation tool is shown in the figure below:
  2. For account and password settings of CVAT annotation tool, please refer to the relevant settings in the operation guide.

Container usage

Q1. What is the port range for OneAI Container Service?

OneAI Container Service offers static ports in the range 30000-32767. Please refer to OneAI Container Service > Network Setting for more information.

Q2. How do I create an OneAI container image?

Prepare your container image and use the Docker CLI to push it to the OneAI container image registry. Docker CLI information can be found in the official Docker documentation.
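For illustration only, here is a minimal push sketch using the Docker SDK for Python (the docker package) rather than the Docker CLI; the registry URL, project name, image name, and credentials are hypothetical placeholders.

```python
# Minimal sketch: tag and push a local image to a private registry using the
# Docker SDK for Python. Registry, project, image, and credentials are placeholders.
import docker

client = docker.from_env()

# Authenticate against the target registry (placeholder values).
client.login(username="user", password="password",
             registry="registry.oneai.example.com")

# Tag the local image for the registry, then push it.
image = client.images.get("my-image:latest")
image.tag("registry.oneai.example.com/my-project/my-image", tag="latest")

for line in client.images.push("registry.oneai.example.com/my-project/my-image",
                               tag="latest", stream=True, decode=True):
    print(line)
```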

Q3. Can I connect to OneAI Container Service via SSH?

Whether SSH connections are available depends on the image. The system's built-in nvidia-official-images public images support SSH connections. If you want to use SSH with other images, it is recommended that you install the sshd-related packages in the image. For details on connecting to OneAI Container Service via SSH, please refer to the user manual; a connection sketch follows.
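As an illustration, here is a minimal SSH connection sketch using the paramiko library; the host address, port, and credentials are hypothetical placeholders, with the port chosen from the 30000-32767 static port range mentioned above.

```python
# Minimal sketch: connect to a container over SSH and run a command.
# Host, port, username, and password are illustrative placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname="203.0.113.10", port=30022,
               username="root", password="your-password")

# Run a command inside the container and print its output.
stdin, stdout, stderr = client.exec_command("nvidia-smi")
print(stdout.read().decode())

client.close()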

Q4. Why does the size of the container image get smaller when I upload from my local environment?

The OneAI container image registry compresses the container image file you upload, resulting in a smaller container image file with no effect on its contents.

Data storage

Q1. What kind of data storage services are available for OneAI?

OneAI uses the OneAI Storage Service as its data storage and management tool. It provides secure and reliable storage compatible with Amazon S3, supports third-party tools (such as S3 Browser), and enables data sharing between OneAI services or with other project members.
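As an illustration of the S3 compatibility, here is a minimal upload-and-list sketch using boto3; the endpoint URL, bucket name, object key, and credentials are hypothetical placeholders.

```python
# Minimal sketch: use boto3 against an S3-compatible endpoint.
# Endpoint, bucket, key, and credentials are illustrative placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.oneai.example.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a local dataset archive so other OneAI services or members can use it.
s3.upload_file("dataset.zip", "my-bucket", "datasets/dataset.zip")

# List the objects in the bucket to confirm the upload.
for obj in s3.list_objects_v2(Bucket="my-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```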

Q2. How much storage space does the OneAI Container Service have?

The storage space available to a container is based on the size of the storage mounted from the OneAI Storage Service.

Q3. What kind of data can OneAI Storage Service store?

Data can be stored in any format and of any type.

Q4. What is the maximum amount of space and number of files that can be stored in the OneAI Storage Service?

There is no usage limit on the total amount of data and objects that can be stored by the OneAI Storage Service.

Training model

Q1. Does OneAI support multi-GPU training?

OneAI AI Maker public templates automatically distribute deep learning models and large training sets across multiple GPUs. Custom training code must be manually adapted to invoke multiple GPUs, and the invocation method varies depending on the deep learning framework (see the sketch below).
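As an illustration only, here is a minimal multi-GPU sketch using PyTorch's DataParallel; it shows framework-level GPU invocation and is not OneAI-specific code.

```python
# Minimal sketch: replicate a model across all visible GPUs with PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

if torch.cuda.device_count() > 1:
    # Batches are split across GPUs automatically during the forward pass.
    model = nn.DataParallel(model)
model = model.to("cuda")

x = torch.randn(64, 512).to("cuda")
output = model(x)      # forward pass is distributed across the GPUs
print(output.shape)    # torch.Size([64, 10])
```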

Q2. Which models can be tuned with an OneAI AI Maker SmartML training job?

SmartML Training Job offers 4 optimization algorithms to choose from: Bayesian, TPE, Grid, and Random. If you don't use a public template, you must read values from os.environ in your training code to manually set the tunable hyperparameters, model type, etc. (see the sketch below). For more information, please refer to the public template image-classification case study setup.
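As an illustration, here is a minimal sketch of reading tuned hyperparameters from environment variables via os.environ; the variable names (LEARNING_RATE, BATCH_SIZE, MODEL_TYPE) are illustrative placeholders, not documented OneAI names.

```python
# Minimal sketch: read hyperparameters from environment variables so a tuner
# can set them per trial. Variable names are illustrative placeholders.
import os

learning_rate = float(os.environ.get("LEARNING_RATE", "0.001"))
batch_size = int(os.environ.get("BATCH_SIZE", "32"))
model_type = os.environ.get("MODEL_TYPE", "resnet50")

print(f"training with lr={learning_rate}, batch_size={batch_size}, model={model_type}")
# ... build the model and training loop using these values ...
```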

Q3. What model types can be imported into OneAI AI Maker models?

OneAI models can store models of any type. Before importing, you need to package the model as a ZIP file and upload it to the OneAI Storage Service (see the sketch below). Please refer to AI Maker models for more information.
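As an illustration, here is a minimal sketch of packaging a trained model directory as a ZIP file before uploading it to the OneAI Storage Service; the directory and archive paths are hypothetical placeholders.

```python
# Minimal sketch: zip a model directory for upload. Paths are placeholders.
import zipfile
from pathlib import Path

model_dir = Path("exported_model")   # directory containing weights/config
archive = Path("my_model.zip")

with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
    for path in model_dir.rglob("*"):
        if path.is_file():
            # Store paths relative to the model directory inside the archive.
            zf.write(path, path.relative_to(model_dir))

print(f"packaged {archive} ({archive.stat().st_size} bytes)")
```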

Q4. How can I confirm that GPU resources are being used?
  • The compute resources used by OneAI Notebook Service, Container Service and Inference Service can be monitored through OneAI Resources.
  • For OneAI training jobs, computing resource status within the last 7 days is available; please contact customer service to obtain the monitoring data.
Q5. How can I download the trained model to my local environment?
  • AI Maker: Please refer to this document for downloading instructions.
  • AI Maker (Beta): The model will be stored in the OneAI Storage Service. You can download it from the directory where the model is stored (see the sketch below).
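As an illustration, here is a minimal download sketch using boto3 against the S3-compatible OneAI Storage Service; the endpoint URL, bucket name, and object key are hypothetical placeholders.

```python
# Minimal sketch: download a stored model archive to the local environment.
# Endpoint, bucket, key, and credentials are illustrative placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.oneai.example.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

s3.download_file("my-bucket", "models/my_model.zip", "my_model.zip")
```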