Tried to install NVIDIA Isaac Sim to run some local reinforcement learning experiments, but it turns out my (expensive at the time) TITAN V GPU is not compatible. Isaac Sim requires an RTX-branded GPU (which offers hardware-accelerated ray tracing). Fun fact: even the latest NVIDIA H100 data center GPUs don't support ray tracing - their flagship GPUs are optimized for AI training + inference. For simulation workloads, you need something like an L4, or one of the desktop-class GeForce (gaming) or A6000 (pro workstation) GPUs. NVIDIA refers to this as the "3 GPU architecture" (simulation, training/inference, and on-robot). So I've purchased a GeForce RTX 4090, and damn this thing is huge. "Titan" V for scale. It barely fits in my desktop case, and it turns out I need to order some more power cables off Amazon before I can even plug it in. More of my RL journey to come.
You might need CPU and PSU upgrades to avoid bottlenecks as well. Also, the RTX 40 series cards use GDDR6X memory. And unlike RAM, a GPU will physically fit in pretty much any motherboard (as long as the case has room for it), but performance can take a hit if the motherboard only supports an older PCIe generation.
I really liked Isaac Sim, but sometimes I felt the simulator was made only to sell their GPUs. I think it could be optimized in many ways. I also did a laptop upgrade + RAM upgrade to get the Sim running smoothly. I don't remember doing this many upgrades for any other piece of software in my life 😐
I still remember the shock when I upgraded from a GTX 980 to a 40 series. I was told cards are huge nowadays, but it's hard to wrap your head around it until you see one in person. If they keep growing at this rate, ATX cases may start to have trouble containing them.
The banana for scale drives home the size.
Awesome!! I am starting my journey with RL and Robotic Transformers this week too! Hope to learn a lot from your updates :) PS: When will FoxGlove start hiring anyone below the staff level? I'm highly interested
Once you put all of it together, consider using the container setup. That way you'll get used to scaling up training for when you're ready to deploy in the cloud (AWS, and more recently Azure, offer Omniverse support; GCP does not). From my personal experience, I usually train policies in other frameworks. My favorite is Brax, a fully differentiable physics engine that runs on top of JAX. Since it's JAX, you can train a complex policy in Google Colab (so you can start for free and get very, very far). However, if you are just starting out, I'd recommend MJX, the JAX-based backend that ships with MuJoCo 3.0 and that Brax can use as its physics engine (it also runs in Google Colab or locally). Then, once you've trained the policy, you can run it in Omniverse, take advantage of all the assets and scenes available there, and build beautifully rendered scenes for sales purposes.
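For anyone curious what starting with MJX looks like in practice, here is a minimal sketch (assuming mujoco 3.0+ and jax are installed; a real RL setup would load a robot model and vmap the step over many parallel environments):

```python
# Minimal MJX sketch: load a toy model, move it to the accelerator,
# and jit-compile a single physics step. Assumes mujoco >= 3.0 and jax.
import jax
import mujoco
from mujoco import mjx

# Toy model: a single free-floating sphere. A real RL setup would
# load a robot MJCF model here instead.
XML = """
<mujoco>
  <worldbody>
    <body>
      <freejoint/>
      <geom size="0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

mj_model = mujoco.MjModel.from_xml_string(XML)
mjx_model = mjx.put_model(mj_model)   # copy the model to GPU/TPU (or CPU fallback)
mjx_data = mjx.make_data(mjx_model)   # device-side simulation state

# jit-compile one physics step; for RL rollouts you would jax.vmap this
# over thousands of parallel environments.
step_fn = jax.jit(mjx.step)
mjx_data = step_fn(mjx_model, mjx_data)
print(mjx_data.qpos)                  # free-joint pose after one step
```

The same pattern runs unchanged in a free Colab runtime, which is the "start for free" path mentioned above.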
If you choose the right motherboard and 4090 adapter, there is a way to squeeze three in there!!
I got to assemble my own desktop last year, a lifelong dream fulfilled at last - thanks to NVIDIA Isaac Sim.
Having a similar experience right now :-)
They just need to add more VRAM to the 4090 (or maybe the 5090, when that comes out), as I keep hitting the 24 GB memory limit. It would be nice if you could upgrade the memory on consumer GPUs for AI...