Instance Sizes

Choose the best resources to efficiently build and deploy models

Instance sizes allow simple selection of compute and memory resources when building and deploying models.

This page describes the instance sizes available on JFrog ML, helping you choose the one best suited to your workload.

📘

Note: Instance configuration for building and deploying models may still be customized individually.

Select an instance size from a wide variety of options

Build & deploy models

General purpose instances

JFrog ML offers a wide range of instance sizes for building and deploying models. Our general-purpose instances provide varying levels of CPU and memory resources, allowing you to balance efficiency and performance.

Choose the instance size that best matches your requirements from the table below:

| Instance | CPUs | Memory (GB) | QPU |
| --- | --- | --- | --- |
| Tiny | 1 | 2 | 0.25 |
| Small | 2 | 8 | 0.5 |
| Medium | 4 | 16 | 1 |
| Large | 8 | 32 | 2 |
| XLarge | 16 | 64 | 4 |
| 2XLarge | 32 | 128 | 8 |
| 4XLarge | 64 | 256 | 16 |
| 8XLarge | 128 | 512 | 32 |
| 12XLarge | 192 | 768 | 48 |
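To illustrate how the sizes scale, the table above can be encoded as a small lookup that picks the cheapest (lowest-QPU) instance satisfying a resource requirement. This is a standalone sketch for illustration only; the data mirrors the table, and the helper function is hypothetical, not part of qwak-sdk:

```python
# General-purpose instance sizes from the table above: (CPUs, memory in GB, QPU).
INSTANCES = {
    "tiny": (1, 2, 0.25),
    "small": (2, 8, 0.5),
    "medium": (4, 16, 1),
    "large": (8, 32, 2),
    "xlarge": (16, 64, 4),
    "2xlarge": (32, 128, 8),
    "4xlarge": (64, 256, 16),
    "8xlarge": (128, 512, 32),
    "12xlarge": (192, 768, 48),
}

def smallest_instance(min_cpus: int, min_memory_gb: int) -> str:
    """Return the lowest-QPU instance meeting the CPU and memory minimums."""
    candidates = [
        name
        for name, (cpus, memory_gb, _qpu) in INSTANCES.items()
        if cpus >= min_cpus and memory_gb >= min_memory_gb
    ]
    # QPU grows monotonically with size, so the minimum-QPU candidate is the cheapest fit.
    return min(candidates, key=lambda name: INSTANCES[name][2])

print(smallest_instance(4, 20))  # a 4-CPU / 20 GB job needs "large" (Medium has only 16 GB)
```

Note that CPUs and memory double at each step while maintaining a 1:4 CPU-to-memory ratio (except Tiny), so over-requesting either resource moves you up the whole tier.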

GPU instances

Build and deploy models on GPU-backed machines from the selection in the table below:

| Instance | GPU Type | GPUs | CPUs | Memory (GB) | QPU |
| --- | --- | --- | --- | --- | --- |
| gpu.a10.xl | NVIDIA A10G | 1 | 3 | 14 | 5.03 |
| gpu.a10.2xl | NVIDIA A10G | 1 | 7 | 28 | 6.06 |
| gpu.a10.4xl | NVIDIA A10G | 1 | 15 | 59 | 8.12 |
| gpu.a10.8xl | NVIDIA A10G | 1 | 32 | 123 | 12.24 |
| gpu.a10.12xl | NVIDIA A10G | 4 | 47 | 189 | 28.36 |
| gpu.t4.xl | NVIDIA T4 | 1 | 3 | 14 | 2.19 |
| gpu.t4.2xl | NVIDIA T4 | 1 | 7 | 28 | 3.32 |
| gpu.t4.4xl | NVIDIA T4 | 1 | 15 | 59 | 5.58 |
| gpu.a100.xl | NVIDIA A100 | 1 | 11 | 78 | 15.9 |
| gpu.a100.8xl | NVIDIA A100 | 8 | 95 | 1072 | 163.2 |
| gpu.v100.xl | NVIDIA V100 | 1 | 7 | 56 | 15.9 |
| gpu.v100.4xl | NVIDIA V100 | 4 | 31 | 227 | 63.6 |
| gpu.v100.8xl | NVIDIA V100 | 8 | 63 | 454 | 127.2 |
| gpu.k80.xl | NVIDIA K80 | 1 | 3 | 56 | 4.6 |
| gpu.k80.8xl | NVIDIA K80 | 8 | 31 | 454 | 36.8 |
| gpu.k80.16xl | NVIDIA K80 | 16 | 63 | 681 | 73.8 |
| gpu.l4.xl | NVIDIA L4 | 1 | 3 | 12 | 3.53 |

Feature store

Data cluster sizes

Our Feature Store offers a variety of sizes to accommodate your needs. Select the appropriate data cluster size to ensure scalability and efficiency in handling your data ingestion jobs.

Take a look at the table below to explore the available data cluster sizes:

| Size | QPU | Notes |
| --- | --- | --- |
| Nano | 3 | Available for Streaming features |
| Small | 6 | |
| Medium | 12 | |
| Large | 24 | |
| X-Large | 48 | |
| 2X-Large | 96 | |

Instance sizes in qwak-sdk

The qwak-sdk gives you the flexibility to choose instance sizes when building and deploying models.

The examples below show how to specify the desired instance size.

Build models on CPU instances

```shell
qwak models build --model-id "example-model-id" --instance medium .
```

Build models on GPU instances

```shell
qwak models build --model-id "example-model-id" --instance "gpu.t4.xl" .
```

Deploy models on CPU instances

```shell
qwak models deploy realtime --model-id "example-model-id" --instance large
```

Deploy models on GPU instances

```shell
qwak models deploy realtime --model-id "example-model-id" --instance "gpu.a10.4xl"
```

📘

Note: Existing resource configuration flags are supported as well: --memory, --cpus, --gpu-type, --gpu-amount.
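For example, a deployment that sets resources directly instead of using a named instance size might look like the following sketch. The flags are those listed in the note above, but the value formats (e.g. memory units) are assumptions; consult the CLI help for the exact syntax:

```shell
# Hypothetical example: override resources directly rather than passing --instance.
# The memory value's unit is an assumption; verify with `qwak models deploy realtime --help`.
qwak models deploy realtime --model-id "example-model-id" --cpus 2 --memory 4096
```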

Instance sizes in the UI

In the JFrog ML UI, you can easily select and configure instance sizes for your models. Whether you need CPU or GPU instances, our UI offers intuitive options to choose the right size for your workload.

During the deployment process, use the dropdown to specify the instance size for optimal performance.

The instance size dropdown offers a wide selection of available instances

Setting custom configuration

JFrog ML allows you to manually set custom instance configuration sizes for building and deploying your models, regardless of the default instance type options.

Custom instance type configuration is currently available for CPU deployments only.

Set custom instance configuration for CPU deployments