Credits usage
Depending on your computer vision workflows, the amount of computing power you need will vary. As a general rule, the more computing power your deployments require, the more credits they consume.
To help you choose the right plan, the credits usage for each compute infrastructure is shown below.
Serverless deployments
For serverless deployments (CPU only), you are charged based on the workflow execution time, in seconds, and only when you call the deployment endpoint. See infrastructure specifications.
Size | vCPU | RAM | Credits/second |
---|---|---|---|
S | 3 | 4GB | 0.829 |
M | 4 | 6GB | 0.836 |
L | 5 | 8GB | 0.842 |
XL | 6 | 10GB | 0.849 |
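Under this model, the charge for a single endpoint call is simply the per-second rate multiplied by the execution time. A minimal sketch in Python (the rate table is copied from above; the function name is illustrative):

```python
# Per-second credit rates for serverless deployments, taken from the table above.
SERVERLESS_RATES = {"S": 0.829, "M": 0.836, "L": 0.842, "XL": 0.849}

def serverless_cost(size: str, execution_seconds: float) -> float:
    """Credits consumed by one endpoint call of the given duration."""
    return SERVERLESS_RATES[size] * execution_seconds

# A 10-second workflow execution on a size M deployment:
print(round(serverless_cost("M", 10), 2))  # 8.36 credits
```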
As a concrete example of credits usage, we simulate a classic OCR workflow composed of:
- basic pre-processing algorithms (noise reduction and luminosity correction)
- text detection with the MMLAB framework
- text recognition with the MMLAB framework
The table below shows how many images can be processed with the monthly version of each plan.
Size | Execution time (s) | Credits/image | Starter plan (images/month) | Basic plan (images/month) | Pro plan (images/month) |
---|---|---|---|---|---|
S | 16 | 13.26 | 151 | 755 | 6034 |
M | 12 | 10.03 | 200 | 1000 | 8000 |
L | 9.6 | 8.08 | 248 | 1238 | 9901 |
XL | 8 | 6.79 | 295 | 1473 | 11783 |
As the table shows, the cheapest serverless deployment (the one with the lowest compute power) is not always the most economical choice: because you are charged for execution time, the efficiency of your workflow implementation matters just as much as the instance size.
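The per-image figures above follow directly from credits/image = execution time × per-second rate. A sketch reproducing that column (the monthly credit totals per plan are NOT stated in this document; the values below are an assumption inferred from the table's own images/month figures, so treat them as approximate):

```python
# Serverless per-second rates and measured OCR execution times (from the tables above).
RATES = {"S": 0.829, "M": 0.836, "L": 0.842, "XL": 0.849}
OCR_EXEC_TIME = {"S": 16, "M": 12, "L": 9.6, "XL": 8}

# ASSUMPTION: monthly credit totals per plan, inferred from the table's
# images/month columns -- not stated explicitly in this document.
PLAN_CREDITS = {"Starter": 2_000, "Basic": 10_000, "Pro": 80_000}

for size in RATES:
    credits_per_image = RATES[size] * OCR_EXEC_TIME[size]
    images_starter = PLAN_CREDITS["Starter"] / credits_per_image
    print(f"{size}: {credits_per_image:.2f} credits/image, "
          f"~{images_starter:.0f} images/month on Starter")
```

Note how the larger sizes, despite a higher per-second rate, end up cheaper per image because the workflow finishes faster.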
CPU instances
For CPU instance deployments, you are charged based on the time the instance is running, in seconds. See infrastructure specifications.
The table below shows how long a single deployment can run with the monthly version of each plan.
Size | Provider | vCPU | RAM | Credits/second | Starter plan (lifetime) | Basic plan (lifetime) | Pro plan (lifetime) |
---|---|---|---|---|---|---|---|
XS | AWS | 2 | 8GB | 0.00477 | ~117h (4.9 days) | ~582h (24.3 days) | ~4657h (194 days) |
XS | GCP | 2 | 8GB | 0.00497 | ~112h (4.7 days) | ~559h (23.3 days) | ~4470h (186 days) |
S | AWS | 4 | 8GB | 0.00802 | ~70h (2.9 days) | ~347h (14.4 days) | ~2770h (115 days) |
S | GCP | 4 | 16GB | 0.00904 | ~62h (2.6 days) | ~308h (12.8 days) | ~2459h (103 days) |
M | AWS | 8 | 16GB | 0.01532 | ~37h (1.5 days) | ~182h (7.6 days) | ~1451h (61 days) |
M | GCP | 8 | 32GB | 0.01718 | ~33h (1.4 days) | ~162h (6.7 days) | ~1294h (54 days) |
L | AWS | 16 | 32GB | 0.02991 | ~19h (0.8 days) | ~93h (3.9 days) | ~744h (31 days) |
L | GCP | 16 | 64GB | 0.03345 | ~17h (0.7 days) | ~83h (3.5 days) | ~665h (28 days) |
XL | AWS | 32 | 64GB | 0.05908 | ~10h (0.4 days) | ~48h (2 days) | ~377h (16 days) |
XL | GCP | 32 | 128GB | 0.06599 | ~9h (0.4 days) | ~43h (1.8 days) | ~337h (15 days) |
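The lifetimes above are simply the plan's credit allowance divided by the instance's per-second burn rate. A sketch of that calculation (the plan credit totals are an assumption inferred from the table, not stated in this document):

```python
# ASSUMPTION: monthly credit totals per plan, inferred from the lifetimes above.
PLAN_CREDITS = {"Starter": 2_000, "Basic": 10_000, "Pro": 80_000}

def lifetime_hours(credits_per_second: float, plan: str) -> float:
    """Hours a single always-on instance can run on one month of plan credits."""
    return PLAN_CREDITS[plan] / (credits_per_second * 3600)

# XS AWS CPU instance (0.00477 credits/s) on the Basic plan:
print(round(lifetime_hours(0.00477, "Basic")))  # ~582 hours
```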
GPU instances
For GPU instance deployments, you are charged based on the time the instance is running, in seconds. See infrastructure specifications.
The table below shows how long a single deployment can run with the monthly version of each plan.
Size | Provider | vCPU | RAM | GPU | Credits/second | Starter plan (lifetime) | Basic plan (lifetime) | Pro plan (lifetime) |
---|---|---|---|---|---|---|---|---|
XS | AWS | 4 | 16GB | NVIDIA T4 16GB | 0.02132 | ~27h (1.1 days) | ~131h (5.4 days) | ~1043h (44 days) |
XS | GCP | 4 | 16GB | NVIDIA L4 24GB | 0.02687 | ~21h (0.9 days) | ~104h (4.3 days) | ~828h (35 days) |
S | AWS | 8 | 32GB | NVIDIA T4 16GB | 0.03263 | ~18h (0.7 days) | ~86h (3.6 days) | ~681h (29 days) |
S | GCP | 8 | 32GB | NVIDIA L4 24GB | 0.03490 | ~16h (0.7 days) | ~80h (3.4 days) | ~636h (27 days) |
M | AWS | 4 | 16GB | NVIDIA A10 24GB | 0.03826 | ~15h (0.6 days) | ~73h (3.1 days) | ~581h (25 days) |
M | GCP | 16 | 64GB | NVIDIA L4 24GB | 0.04662 | ~12h (0.5 days) | ~60h (2.5 days) | ~477h (20 days) |
L | AWS | 8 | 32GB | NVIDIA A10 24GB | 0.04975 | ~12h (0.5 days) | ~56h (2.4 days) | ~447h (19 days) |
L | GCP | 12 | 85GB | NVIDIA A100 40GB | 0.12586 | ~5h (0.2 days) | ~23h (0.9 days) | ~177h (8 days) |
XL | AWS | 16 | 64GB | NVIDIA A10 24GB | 0.06636 | ~9h (0.4 days) | ~42h (1.8 days) | ~335h (14 days) |
XL | GCP | 12 | 170GB | NVIDIA A100 80GB | 0.17426 | ~4h (0.2 days) | ~16h (0.7 days) | ~128h (6 days) |
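To budget for an instance that must stay up continuously, multiply the per-second rate by the uptime you need. For example, a rough estimate for keeping the XS AWS GPU instance (0.02132 credits/s) running around the clock for 30 days (the helper below is illustrative, not part of any API):

```python
def monthly_credits(credits_per_second: float, days: int = 30) -> float:
    """Credits consumed by an instance left running for the given number of days."""
    return credits_per_second * days * 24 * 3600

# XS AWS GPU instance, always on for 30 days:
print(round(monthly_credits(0.02132)))  # ~55261 credits
```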
What happens when credits run out
When your credits run out, you can no longer deploy new workflows. You must either wait until your next credit renewal date, which depends on whether you are on a monthly or yearly plan, or upgrade your plan to get more credits. New credits are available immediately after the subscription is validated (see plan page).
For active deployments, the behaviour depends on the compute infrastructure:
- Serverless: Deployments are preserved, but you cannot send requests to them.
- CPU/GPU instances: Deployments (and the infrastructure behind them) are deleted and the endpoint URL becomes invalid. Workflows are preserved, so you can redeploy them once your account is credited again. Please note that a new endpoint URL will be attached to the new deployment, even if the workflow is the same.
We send notification emails to individual users or organization owners when credits are running low, so you can decide how to handle your active deployments before they are affected.