Instance Sizes
Choose the best resources to efficiently build and deploy models
Instance sizes allow simple selection of compute and memory resources when building and deploying models.
This page details the instance sizes available on JFrog ML, helping you choose the optimal size for your workload.
Note: Instance configuration for building and deploying models may still be customized individually.
Build & deploy models
General purpose instances
JFrog ML offers a wide range of instance sizes for building and deploying models. Our general-purpose instances provide varying levels of CPU and memory resources, allowing you to optimize efficiency and performance.
Choose the instance size that best matches your requirements from the table below:
Instance | CPUs | Memory (GB) | QPU |
---|---|---|---|
Tiny | 1 | 2 | 0.25 |
Small | 2 | 8 | 0.5 |
Medium | 4 | 16 | 1 |
Large | 8 | 32 | 2 |
XLarge | 16 | 64 | 4 |
2XLarge | 32 | 128 | 8 |
4XLarge | 64 | 256 | 16 |
8XLarge | 128 | 512 | 32 |
12XLarge | 192 | 768 | 48 |
GPU instances
Build and deploy models on GPU-based machines by selecting an instance from the table below:
Instance | GPU Type | GPUs | CPUs | Memory (GB) | QPU |
---|---|---|---|---|---|
gpu.a10.xl | NVIDIA A10G | 1 | 3 | 14 | 5.03 |
gpu.a10.2xl | NVIDIA A10G | 1 | 7 | 28 | 6.06 |
gpu.a10.4xl | NVIDIA A10G | 1 | 15 | 59 | 8.12 |
gpu.a10.8xl | NVIDIA A10G | 1 | 32 | 123 | 12.24 |
gpu.a10.12xl | NVIDIA A10G | 4 | 47 | 189 | 28.36 |
gpu.t4.xl | NVIDIA T4 | 1 | 3 | 14 | 2.19 |
gpu.t4.2xl | NVIDIA T4 | 1 | 7 | 28 | 3.32 |
gpu.t4.4xl | NVIDIA T4 | 1 | 15 | 59 | 5.58 |
gpu.a100.xl | NVIDIA A100 | 1 | 11 | 78 | 15.9 |
gpu.a100.8xl | NVIDIA A100 | 8 | 95 | 1072 | 163.2 |
gpu.v100.xl | NVIDIA V100 | 1 | 7 | 56 | 15.9 |
gpu.v100.4xl | NVIDIA V100 | 4 | 31 | 227 | 63.6 |
gpu.v100.8xl | NVIDIA V100 | 8 | 63 | 454 | 127.2 |
gpu.k80.xl | NVIDIA K80 | 1 | 3 | 56 | 4.6 |
gpu.k80.8xl | NVIDIA K80 | 8 | 31 | 454 | 36.8 |
gpu.k80.16xl | NVIDIA K80 | 16 | 63 | 681 | 73.8 |
gpu.l4.xl | NVIDIA L4 | 1 | 3 | 12 | 3.53 |
Feature store
Data cluster sizes
Our Feature Store offers a variety of sizes to accommodate your needs. Select the appropriate data cluster size to ensure scalability and efficiency in handling your data ingestion jobs.
Take a look at the table below to explore the available data cluster sizes:
Size | QPU | Notes |
---|---|---|
Nano | 3 | Available for Streaming features |
Small | 6 | |
Medium | 12 | |
Large | 24 | |
X-Large | 48 | |
2X-Large | 96 | |
Instance sizes in qwak-sdk
Using the qwak-sdk provides you with flexibility in choosing instance sizes for building and deploying models.
Take a look at the examples below to understand how to specify the desired instance size.
Build models on CPU instances
qwak models build --model-id "example-model-id" --instance medium .
Build models on GPU instances
qwak models build --model-id "example-model-id" --instance "gpu.t4.xl" .
Deploy models on CPU instances
qwak models deploy realtime --model-id "example-model-id" --instance large
Deploy models on GPU instances
qwak models deploy realtime --model-id "example-model-id" --instance "gpu.a10.4xl"
Note: Existing resource configuration flags are supported as well: --memory, --cpus, --gpu-type, and --gpu-amount.
Instance sizes in the UI
In the JFrog ML UI, you can easily select and configure instance sizes for your models. Whether you need CPU or GPU instances, our UI offers intuitive options to choose the right size for your workload.
During the deployment process, use the dropdown to specify the instance size for optimal performance.
Setting custom configuration
JFrog ML allows you to manually set custom instance configuration sizes for building and deploying your models, regardless of the default instance type options.
Custom instance type configuration is currently available for CPU deployments only.
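As a sketch, a custom CPU configuration can be set by passing the resource flags mentioned above directly to the deploy command instead of an --instance size. The model ID and resource values below are illustrative, and the exact units accepted by --memory should be confirmed via the CLI help for your qwak-sdk version.

```shell
# Deploy a real-time model with a custom CPU/memory configuration
# (CPU deployments only). Values are illustrative placeholders;
# check `qwak models deploy realtime --help` for accepted units.
qwak models deploy realtime \
  --model-id "example-model-id" \
  --cpus 3 \
  --memory 4096
```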