For generative models, the CPU typically shouldn't play a large role, but if your other work relies more heavily on the CPU then it could still be an important consideration. If your workflow involves data collection, manipulation, or pre-processing, the CPU will be a critical component to select carefully. Lastly, the choice of platform will dictate other factors like maximum memory capacity, PCI-e lane count, I/O connectivity, and future upgrade paths.
Due to the emphasis on data movement & transformation in data science, which can take advantage of multi-core parallelism, CPUs are well-suited to such workflows, as opposed to ML/DL (machine learning/deep learning) where the compute is handled by the GPU.
The choice of platform or specific CPU doesn't appear to make any noticeable impact on the speed of generation. All modern CPUs are more than capable of supporting modern graphics cards, which is where the heavy lifting is done. If you want to employ multiple GPUs to run multiple models at once, a CPU/platform with more PCI-e lanes like Threadripper will be better suited than a consumer option – otherwise consumer options are dramatically more affordable.
We would still recommend at minimum an Intel Ultra 5, Intel i5, or AMD R5 CPU, with a U7/i7/R7 or above as a more comfortable choice, especially for whatever the future may hold for you & your system.
With the bulk of the work falling on the GPU(s), the CPU doesn't have much of an impact. To reiterate an earlier point though, any other tasks the PC will be completing should also be considered to ensure you have a well-rounded system for all the work you'll be throwing at it.
Consumer-grade generative AI applications don't seem to distinguish between AMD and Intel CPUs. There may be software optimisations in some niche applications that favour Intel or AMD, however.
RAM performance and capacity requirements depend on the tasks being run, but they can be a very important consideration, and there are some minimum recommendations. Consider other tasks the PC might be performing too!
For generative AI, RAM capacity doesn't really factor into the equation, but as a general rule we'd recommend at least twice the amount of total VRAM (GPU video RAM).
For a system with Nvidia's RTX 4080 Super 16GB that would mean 32GB of system RAM, or with the RTX 4090 24GB it means 48GB (or 64GB as that's a more common increment).
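As a rough sketch of that rule of thumb in code: the "at least twice the total VRAM" rule comes from above, while the helper name and the list of common RAM kit sizes are our own illustrative assumptions.

```python
# Rough sketch of the "system RAM >= 2x total VRAM" rule of thumb described above.
# The helper name and the list of common RAM kit sizes are illustrative assumptions.

COMMON_RAM_SIZES_GB = [16, 32, 48, 64, 96, 128]

def recommended_system_ram(total_vram_gb: int) -> int:
    """Return the smallest common RAM size that is at least twice the total VRAM."""
    target = total_vram_gb * 2
    for size in COMMON_RAM_SIZES_GB:
        if size >= target:
            return size
    return target  # fall back to the raw 2x figure for very large VRAM pools

print(recommended_system_ram(16))  # RTX 4080 Super 16GB -> 32
print(recommended_system_ram(24))  # RTX 4090 24GB -> 48 (or round up to 64)
```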
32GB is about the minimum amount of RAM we recommend for most users lately, with 64GB typically being a comfortable and somewhat future-proof option. With actual workloads being thrown at the PC, and potentially multiple applications & browser tabs open at once, you should factor this into your decision.
GPUs are the centre of generative AI workloads. Regardless of the output type (image, video, voice, or text), most projects are based around Nvidia's CUDA, but many projects also support AMD's ROCm.
The main factors to consider in GPU selection for generative AI are VRAM capacity, software support (CUDA vs ROCm), and raw compute performance.
Nvidia's RTX 4080 Super, which has 16GB of VRAM, and the RTX 4090, which has 24GB, are easy recommendations. If your projects call for more memory you can step up to Nvidia's professional-grade GPUs such as the RTX 5000 Ada 32GB or RTX 6000 Ada 48GB, but these bump up the cost considerably compared to the consumer-grade options.
How much VRAM you need will depend on which models you're using; below is a quick reference.
| Model Version | Minimum VRAM | Recommended VRAM | Training VRAM |
|---------------|--------------|------------------|---------------|
| SD1.5         | 8GB          | 12GB             | 16GB          |
| SDXL          | 12GB         | 16GB             | 24GB          |
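If it helps to fold that reference into a script, here's a minimal sketch: the figures are taken from the table above, while the dictionary layout, helper name, and example calls are our own illustration.

```python
# Minimal sketch encoding the VRAM reference table above. The figures are taken
# from the table; the dictionary layout and helper name are illustrative only.
VRAM_REQUIREMENTS_GB = {
    "SD1.5": {"minimum": 8, "recommended": 12, "training": 16},
    "SDXL": {"minimum": 12, "recommended": 16, "training": 24},
}

def gpu_suits_model(gpu_vram_gb: int, model: str, use: str = "recommended") -> bool:
    """Check whether a GPU's VRAM meets the table's figure for the given use."""
    return gpu_vram_gb >= VRAM_REQUIREMENTS_GB[model][use]

print(gpu_suits_model(16, "SDXL"))              # True: 16GB meets the recommended 16GB
print(gpu_suits_model(16, "SDXL", "training"))  # False: training SDXL calls for 24GB
```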
In short: no. To explain: multiple GPUs can enable you to speed up batch image generation, or allow multiple users to access their own GPU resources from a centralised server. Four GPUs grant you four images in the time it takes one GPU to generate one image (provided nothing else is causing a bottleneck) – but four GPUs do not generate one image four times faster than one GPU! See the sketch below.
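As a toy illustration of that throughput-versus-latency distinction, here's a small sketch; the 10-second per-image figure is a made-up placeholder, not a benchmark.

```python
import math

# Toy illustration of the throughput-vs-latency point above; the 10-second
# per-image figure is a made-up placeholder, not a benchmark.
SECONDS_PER_IMAGE = 10.0

def batch_wall_time(num_images: int, num_gpus: int) -> float:
    """Wall-clock time when each GPU generates whole images independently."""
    return math.ceil(num_images / num_gpus) * SECONDS_PER_IMAGE

print(batch_wall_time(4, 1))  # 40.0 -> one GPU, four images
print(batch_wall_time(4, 4))  # 10.0 -> four GPUs, four images in parallel
print(batch_wall_time(1, 4))  # 10.0 -> extra GPUs don't speed up a single image
```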
As it stands, Nvidia graphics solutions have the edge over AMD. Nvidia's CUDA is better supported and the cards have more raw compute power; a winning combination.
No – you should only look to pro cards if the consumer-grade GPUs do not have enough VRAM to satisfy your projects.