NVIDIA NIM FAQ
1. What is NIM access via the NVIDIA Developer Program?
Members of the NVIDIA Developer Program have access to self-hosting NVIDIA NIM for research, application development, and experimentation on up to 16 GPUs on any infrastructure—cloud, data center, or personal workstation. Access to NIM microservices is available through the duration of program membership.
2. How can developers register for the NVIDIA Developer Program and access NIM microservices?
Developers can join the free NVIDIA Developer Program and access NIM at any time via the NVIDIA API Catalog. Users looking for enterprise-grade security, support, and API stability can select the option to access NIM via a free 90-day NVIDIA AI Enterprise Trial using a business email.
3. What does NIM access via the NVIDIA Developer Program include?
- Ability to download and self-host NIM microservices on your own infrastructure
- License to use your self-hosted NIM for research, application development, and experimentation on up to 16 GPUs
- Community support via the NVIDIA Developer Forum
4. How does access to NIM via NVIDIA Developer Program compare to the 90-Day NVIDIA AI Enterprise Trial?
The NVIDIA AI Enterprise 90-Day Trial is designed for production deployments and includes a license for commercial use, enterprise-grade security, support, and API stability. NIM access through the NVIDIA Developer Program is for research, development and testing purposes only and does not include enterprise features or support.
| Offering | NVIDIA AI Enterprise 90-Day License | NIM via NVIDIA Developer Program |
|---|---|---|
| Target Audience | Organizations looking to move an application from development/testing to production (NVIDIA AI Enterprise required). | Developers interested in getting access to NIM microservices for research, development, testing, learning, etc. |
| Software | All of NVIDIA AI Enterprise (i.e., access to the NGC enterprise catalog) | NIM containers only (does not provide access to all of NVIDIA AI Enterprise Essentials) |
| License | Production use | Research, development, and test use only |
| Term | 90 days | Access for duration of Developer Program membership |
| Unit of measure | GPU | GPU |
| # of GPUs | Unlimited | Up to 16 GPUs; additional upon request. |
| Support | NVIDIA Enterprise Support | NVIDIA Developer Community Forum |
| NIM releases and ongoing updates | Yes | Yes |
5. What does “production” mean?
Production use is any use of NIM for purposes other than development, testing, research, or evaluation, such as conducting business transactions or any non-testing activity, including activity serving real end users. Using NIM in production requires an NVIDIA AI Enterprise license.
6. What do I need to deploy NIM microservices into production?
NIM microservices can be deployed into production by purchasing an NVIDIA AI Enterprise subscription, which is available from NVIDIA, reseller partners, and cloud marketplaces. NVIDIA AI Enterprise includes NVIDIA Enterprise Support with optional 24x7 Business Critical support. A free 90-day NVIDIA AI Enterprise evaluation license is also available.
7. What self-support resources are available for NIM through the NVIDIA Developer Program?
Developers can access documentation, tutorials, knowledge base articles, and forum discussions from the NIM for Developers page on NVIDIA Developer.
8. Is NIM free access via the NVIDIA Developer Program available for organizations, businesses, and enterprises?
NIM access through the NVIDIA Developer Program is designed for any individual who wants to use NVIDIA NIM for development, research and testing.
For production use of NIM and access to NVIDIA AI software for the complete AI development lifecycle, organizations can purchase NVIDIA AI Enterprise to leverage enterprise-grade software with dedicated feature branches, rigorous validation processes, and support including direct access to NVIDIA AI experts and defined service-level agreements.
9. Can I run NIM microservices on any GPU?
Yes. NIM microservices run on any supported NVIDIA GPU, on up to 16 GPUs. If you'd like to request a license for development and testing on more than 16 GPUs, please contact us.
10. Are NIM microservices supported on Windows and Linux?
NIM is supported on Linux. NIM microservices are not currently tested on Windows or WSL (Windows Subsystem for Linux).
11. Where do I find my API key?
Log in to the API Catalog, find the model you want to use, and click “Get API Key.” This key both authenticates with the Docker registry to pull the NIM container (whether in the Brev environment or your own cloud/local environment) and authorizes API calls to the model endpoint hosted on the API catalog.
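As a rough sketch of the second use, the hosted catalog endpoints typically follow an OpenAI-compatible schema, with the API key sent as a bearer token. The endpoint URL and model name below are illustrative assumptions, not an authoritative reference; check the model's page on the API Catalog for the exact values:

```python
import json

# Hypothetical values for illustration only; substitute your own key and model.
API_KEY = "nvapi-your-key-here"  # from "Get API Key" on the API Catalog
INVOKE_URL = "https://integrate.api.nvidia.com/v1/chat/completions"

def build_request(prompt: str, model: str = "meta/llama-3.1-8b-instruct"):
    """Assemble headers and an OpenAI-compatible JSON body for a hosted endpoint."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",  # the API key authenticates the call
        "Accept": "application/json",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return headers, json.dumps(body)

headers, body = build_request("What is NVIDIA NIM?")
```

The same key is also passed to `docker login` when pulling the NIM container image.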
12. How can I get more API credits?
The NVIDIA API catalog is a trial experience of NVIDIA NIM limited to 5,000 free API credits. Upon sign-up, users are granted 1,000 API credits. To obtain more, click on your profile from within the API catalog → ‘Request More’. If you signed up to use the API catalog with a personal email address, you will be asked to provide a business email to activate a free 90-day NVIDIA AI Enterprise license and unlock an additional 4,000 credits. To continue using NIM after you’ve used up your credits, you have these options:
- Self-host the NIM API on your cloud provider or on-prem. Research and test use is free under NVIDIA Developer Program access. Please note that your organization must have an NVIDIA AI Enterprise license for production use.
- Use the serverless NIM API on Hugging Face with pay-per-use pricing. The NVIDIA AI Enterprise license is included with this option, so you don’t need a separate license.
13. Why are my requests on build.nvidia.com taking so long?
The NVIDIA API catalog offers a no-cost trial experience of NVIDIA NIM, and you may experience extended wait times during periods of high load. To ensure consistent performance, we recommend the following options:
- Self-host the NIM API on your cloud provider or on-prem. Research and test use is free under NVIDIA Developer Program access. Please note that your organization must have an NVIDIA AI Enterprise license for production use.
- Use the serverless NIM API on Hugging Face with pay-per-use pricing. The NVIDIA AI Enterprise license is included with this option, so you don’t need a separate license.
14. How many nodes are supported in the NIM on GKE instance?
Node limits differ across GKE clusters and node pools. Please check the per-cluster limits in the GKE documentation against your requirements.
15. What GPU configurations are supported for NIM deployment on GKE?
GKE supports various NVIDIA GPU instances, such as H100, A100, and L4. For NIM deployment on GKE, refer to the hardware support matrix.
16. Will customers be able to use MIG capabilities?
NIM is not supported on MIG instances yet.
17. What are the ownership responsibilities between GKE, NVIDIA, and the customer, in terms of infrastructure and application data?
GKE provides a managed Kubernetes platform to run and deploy workloads at scale. It handles infrastructure-level details such as cluster creation, node provisioning, and installation of lower-level dependencies such as drivers. NVIDIA NIM is a microservice that runs on top of the GPU cluster; you can deploy it to bring up a generative AI model API endpoint quickly.
18. How many replicas are supported for AI/ML workloads on this GKE setup?
You can create as many replicas of the model as your cluster supports.
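As a back-of-the-envelope sketch of that bound (all numbers below are hypothetical, not NIM requirements), replica capacity is limited by the GPUs your node pool provides when each replica must fit on a single node:

```python
def max_replicas(nodes: int, gpus_per_node: int, gpus_per_replica: int) -> int:
    """Replicas that fit when each replica's GPUs must come from a single node."""
    replicas_per_node = gpus_per_node // gpus_per_replica  # whole replicas per node
    return nodes * replicas_per_node

# Hypothetical: 4 nodes with 8 GPUs each, 2 GPUs per replica -> 16 replicas
print(max_replicas(nodes=4, gpus_per_node=8, gpus_per_replica=2))
```

In practice, set the `replicas` field of your Kubernetes Deployment (or an autoscaler's maximum) to a value within this bound.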
19. What monitoring and observability tools are included in the installed stack to monitor AI/ML workloads?
NIM exports Prometheus metrics, such as the number of requests pending in the queue and request latency. You can find the complete list of tools and metrics on the Prometheus page.
20. Can customers modify the monitoring stack, install third-party tools, or export metrics outside of GKE environment?
Yes, customers can install third-party tools and export metrics as per their requirements. NIM does not restrict customers in monitoring and exporting metrics.
21. Are data security, model security, and compliance with regulations such as GDPR and HIPAA being properly upheld?
NIM does not store any customer data and does not include any phone-home mechanism. The customer also owns the model and can choose to deploy it on their preferred infrastructure.