
Showing posts from April, 2025

How not to load a local AI in more steps than you would think possible

First Post! How to Make as Many Mistakes as Possible When Deploying Your First Local AI LLM Model (No Matter How Easy It May Seem)

1. Start with Research: I researched which GPU to buy and found a great deal on a used model from Amazon. What I didn't do was check whether it would fit in my Cisco server or whether it was supported for GPU passthrough by Nutanix. I spent two days trying to install it, only to discover it didn't fit and wasn't supported. Classic.
2. Find a Supported GPU: I eventually found a compatible GPU that fit and was supported by Nutanix. Unfortunately, I forgot to order the correct power supply cable. Once installed, the GPU drew so much power that the server wouldn't even boot. The saga continues.
3. Upgrade the Power Supply: I decided to go big and ordered 1600W power supplies. What I didn't realize? They required a 220V power outlet, which I didn't have. Back to square one.
4. Try a Cable Fix: Thinking maybe the cable was the issue, I ordered a new one. Turns out, it w...
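
The lesson from step 1 is that the software side only starts once the guest VM can actually see the GPU. Below is a minimal sanity-check sketch (my own addition, not from the post), assuming an NVIDIA card passed through to the VM and a guest with the NVIDIA driver plus a CUDA-enabled PyTorch install; run something like this before trying to load any local model.

```python
# Hedged sketch: confirm the passed-through GPU is visible to the guest
# before spending time loading an LLM onto it.
import shutil
import subprocess

import torch  # assumes a CUDA-enabled PyTorch build is installed in the guest

# Driver-level check: nvidia-smi should list the passed-through GPU.
if shutil.which("nvidia-smi"):
    result = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    print(result.stdout.strip() or "nvidia-smi ran but listed no GPUs.")
else:
    print("nvidia-smi not found; the NVIDIA driver is probably not installed yet.")

# Framework-level check: PyTorch must also be able to see and use the device.
if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))
else:
    print("PyTorch cannot see a CUDA device; fix passthrough/drivers before loading a model.")
```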