
About Evatech

Evatech Computers is a 100% Australian owned & operated custom PC provider, specialising in gaming, workstation, and home office PCs tailored and built to order to suit clients' exact needs and budgets.

Configuring a PC for Generative AI (Image & Video)

Published 24th Dec 2024 - 5 minute read

For generative models, the CPU typically shouldn't play a large role, but if your other work relies more heavily on the CPU then it could still be an important consideration. If your workflow involves data collection, manipulation, or pre-processing, the CPU will be a critical component to select carefully. Lastly, the choice of platform will dictate other factors like maximum memory capacity, PCI-e lane count, I/O connectivity, and future upgrade paths.

CPU (Processor)

In data science workflows the emphasis is on data movement and transformation, which can take advantage of multi-core parallelism, so the CPU does much of the work; in ML/DL (machine learning/deep learning), by contrast, the heavy compute shifts to the GPU.

Which CPU to pick for creating AI-generated images & videos?

The choice of platform or specific CPU doesn't appear to have any measurable impact on generation speed. All modern CPUs are more than capable of supporting modern graphics cards, which is where the heavy lifting is done. If you want to employ multiple GPUs to run multiple models at once, a CPU/platform with more PCI-e lanes like Threadripper will be better suited than a consumer option – otherwise consumer options are dramatically more affordable.

We would still recommend at minimum an Intel Ultra 5, Intel i5, or AMD R5 CPU, with a U7/i7/R7 or above as a more comfortable choice, especially for whatever the future may hold for you & your system.

Do more CPU cores speed up generative AI workflows?

With the bulk of the work falling on the GPU(s), additional CPU cores have little to no impact on generation speed. To reiterate an earlier point though, any other tasks the PC will be completing should also be considered to ensure you have a well-rounded system for all the work you'll be throwing at it.

Intel or AMD CPU?

Consumer-grade generative AI applications don't seem to distinguish between AMD and Intel CPUs. There may be software optimisations in some niche applications that favour one vendor over the other, however.


RAM (Memory)

RAM performance and capacity requirements are dependent on the tasks being run but can be a very important consideration and there are some minimal recommendations. Consider other tasks the PC might be performing too!

How much RAM does generative AI need?

For generative AI, RAM capacity doesn't directly affect generation speed, but you still need enough of it: as a general rule we'd recommend at least twice the amount of total VRAM (GPU video RAM).

For a system with Nvidia's RTX 4080 Super 16GB that would mean 32GB of system RAM, or with the RTX 4090 24GB it means 48GB (or 64GB as that's a more common increment).

32GB of RAM is about the minimum we recommend for most users these days, with 64GB typically being a comfortable and somewhat future-proof option. Factor in the actual workloads being thrown at the PC, along with any other applications and browser tabs that may be open at the same time.
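The twice-the-VRAM rule of thumb can be sketched as a quick calculation. This is a hypothetical helper of our own; the 2× multiplier and the common retail capacities are taken from the guidance above:

```python
# Suggest a system RAM size of at least twice the total VRAM,
# rounded up to a common retail capacity.
COMMON_SIZES_GB = [32, 48, 64, 96, 128]

def recommended_ram_gb(total_vram_gb: int) -> int:
    target = 2 * total_vram_gb          # twice-the-VRAM rule of thumb
    for size in COMMON_SIZES_GB:
        if size >= target:              # smallest common size that fits
            return size
    return target                       # beyond listed sizes, fall back to 2x VRAM

print(recommended_ram_gb(16))  # RTX 4080 Super 16GB -> 32
print(recommended_ram_gb(24))  # RTX 4090 24GB -> 48
```

As noted above, stepping up a tier (e.g. 64GB instead of 48GB) is often worthwhile for headroom.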


GPU (Video/Graphics Card)

GPUs are the centre of generative AI workloads. Regardless of the output type (image, video, voice, or text), most projects are built around Nvidia's CUDA, though many also support AMD's ROCm.

The factors to consider in GPU selection for generative AI are:

  • Total memory (VRAM)
  • Memory bandwidth (interface width [bits] x effective memory clock, ÷ 8 for GB/s)
  • Floating point calculations (FP16 is most relevant)
    • Nvidia – Tensor core count & Tensor core generation (latest: 4th)
    • AMD – Compute unit count
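The bandwidth figure in the list above comes from multiplying the memory interface width by the effective per-pin data rate. A minimal sketch, using the RTX 4090's 384-bit bus at 21 Gbps effective as an illustrative example:

```python
def memory_bandwidth_gbps(bus_width_bits: int, effective_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin data rate (Gbps) / 8."""
    return bus_width_bits * effective_rate_gbps / 8

# e.g. a 384-bit bus at 21 Gbps effective
print(memory_bandwidth_gbps(384, 21))  # 1008.0 GB/s
```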

Which GPU to pick for generative AI?

Nvidia's RTX 4080 Super which has 16GB of VRAM & the RTX 4090 which has 24GB are easy recommendations. If your projects call for more memory you can step over to Nvidia's professional grade GPUs such as the RTX 5000 Ada 32GB, or RTX 6000 Ada 48GB, but these bump up the costs considerably compared to the consumer-grade options.

How much VRAM (GPU memory) does generative AI need?

It will depend on what models you're using: below is a quick reference.

Model     Minimum VRAM    Recommended VRAM    Training VRAM
SD1.5     8GB             12GB                16GB
SDXL      12GB            16GB                24GB
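The table above can also be expressed as a small lookup. The values are copied straight from the table; the dictionary keys and helper function are our own illustration:

```python
# Minimum / recommended / training VRAM (GB) per model, from the table above.
VRAM_REQUIREMENTS = {
    "SD1.5": {"minimum": 8, "recommended": 12, "training": 16},
    "SDXL":  {"minimum": 12, "recommended": 16, "training": 24},
}

def fits(model: str, gpu_vram_gb: int, tier: str = "recommended") -> bool:
    """Return True if a GPU with the given VRAM meets the chosen tier for a model."""
    return gpu_vram_gb >= VRAM_REQUIREMENTS[model][tier]

print(fits("SDXL", 16))              # True: 16GB meets the recommended 16GB
print(fits("SDXL", 16, "training"))  # False: training SDXL wants 24GB
```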

Will multiple GPUs improve performance in generative AI?

In short: no, not for a single image. Multiple GPUs can speed up batch image generation, or let multiple users access their own GPU resources on a centralised server. Four GPUs grant you 4 images in the time it takes one GPU to generate 1 image (provided nothing else is causing a bottleneck) – but four GPUs do not generate one image four times faster than one GPU!
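The throughput-versus-latency distinction above can be shown with simple arithmetic. This is an illustrative model assuming a fixed per-image generation time and no other bottlenecks:

```python
import math

def batch_time(num_images: int, num_gpus: int, seconds_per_image: float) -> float:
    """Wall-clock time for a batch: each GPU generates its share of images in parallel."""
    return math.ceil(num_images / num_gpus) * seconds_per_image

# Four GPUs produce 4 images in the time one GPU produces 1...
print(batch_time(4, 4, 10.0))  # 10.0 seconds
# ...but a single image is no faster on four GPUs than on one.
print(batch_time(1, 4, 10.0))  # 10.0 seconds
print(batch_time(1, 1, 10.0))  # 10.0 seconds
```

Throughput scales with GPU count; the latency of any one image does not improve.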

Nvidia or AMD GPU for generative AI?

As it stands, Nvidia graphics solutions have the edge over AMD. Nvidia's CUDA is better supported and the cards have more raw compute power; a winning combination.

Does generative AI require a "professional-grade" GPU?

No – you should only look to pro cards if the consumer-grade GPUs do not have enough VRAM to satisfy your projects.
