GPUs were initially developed to manipulate the data in the computer’s frame buffer and present it as pixels on screen, a task that CPUs weren’t suited to. It required processors that could do lots of relatively simple calculations, but very quickly and all at the same time.
The advent of polygonal 3D graphics led to programmable pixel and vertex shaders, which in turn led to the GPGPU – or general-purpose graphics processing unit – which first began to appear in the early 2000s. In essence, it means that the processors in a GPU can be called upon to take some of the heavy lifting away from the CPU. They’re ideal for highly parallel tasks like raytracing a 3D render or performing complex simulations.
Today’s mighty graphics cards are rated in teraflops – the ability to calculate a trillion floating-point operations per second. Some high-end cards can perform at more than 100 teraflops – but that’s not to say any GPU will speed up your work in Photoshop or make sculpting in ZBrush any faster. Choosing a GPU depends on your workflow, the apps you use and, of course, your budget. Our roundup of the best graphics cards will help you with the latter, but first let's take a closer look at exactly what type of card is best suited to your creative needs.
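As a rough, back-of-the-envelope sketch of where those teraflop figures come from: a card's theoretical peak is usually quoted as shader cores × clock speed × 2 (one fused multiply-add per cycle). The core count and boost clock below are the published figures for Nvidia's GTX 1080 Ti; real-world throughput will always be lower than this theoretical ceiling.

```python
def theoretical_tflops(cores, clock_ghz, flops_per_cycle=2):
    """Peak single-precision throughput: cores x clock x FLOPs per cycle."""
    # clock in GHz gives GFLOPS; divide by 1,000 for TFLOPS
    return cores * clock_ghz * flops_per_cycle / 1000.0

# Nvidia GeForce GTX 1080 Ti: 3,584 CUDA cores at a ~1.58GHz boost clock
print(round(theoretical_tflops(3584, 1.58), 1))  # roughly 11.3 TFLOPS
```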
At the most basic level, you have affordable PC laptops that use an integrated GPU – one that’s built into the CPU itself. These are often weak in terms of performance, but are energy efficient, so you can work for hours without draining the battery.
An integrated GPU will be enough to watch movies, surf the web and play casual games, but might struggle with creative projects. If you need more grunt, you should look for laptops with a discrete GPU: both Nvidia and AMD produce mobile variants of their desktop cards, which provide enough power for most creative tasks, short of 3D animation and rendering.
Having said that, the more expensive (and bulkier) laptops can be equipped with top-end GPUs, like Nvidia’s GeForce RTX 2080. You will pay a premium price for a setup like this, but if you need performance on the go, the laptop form factor is no longer a limitation. Just remember to pack a mains adapter.
When it comes to desktop setups, your choice of GPU is vast. Not only because of the large number of models and options available, but also because there’s a vibrant second-hand market. High-end gamers will upgrade regularly, and so you can often find last-gen models cheaper online. While last year’s GPU might not cut it playing the latest Modern Warfare in 4K, you might find it’s ideal for your illustration, video or animation needs. For example, Nvidia’s GeForce GTX 1080 Ti (released in 2017) is a powerhouse of a card and available for less than £400 if you shop around.
While it’s not the best idea to scrimp on budget, by the same token you don’t necessarily need to buy the most expensive GPU you can find, either. It’s tempting to splurge on a sexy new card with lots of VRAM – but unless you’re handling huge CAD files or doing 3D rendering, any more than 8GB is probably overkill. Similarly, why pay top dollar for one of Nvidia’s RTX GPUs if none of your apps support raytracing?
One key question is: are your programs GPU-accelerated? If your apps employ OpenCL, they’ll work with cards from both Nvidia and AMD, but if they’re CUDA (Compute Unified Device Architecture) accelerated, you’ll need an Nvidia card.
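That rule of thumb can be written out as a toy lookup – the API names are real, but the mapping is a deliberate simplification for illustration (it ignores Apple's Metal, driver quirks and so on):

```python
# Toy sketch: which GPU vendors an app can use, based on its acceleration API.
API_VENDORS = {
    "OpenCL": {"Nvidia", "AMD"},  # open standard, supported by both vendors
    "CUDA": {"Nvidia"},           # proprietary to Nvidia
}

def compatible_vendors(api):
    """Return the set of GPU vendors that can run an app using this API."""
    return API_VENDORS.get(api, set())

print(sorted(compatible_vendors("OpenCL")))  # ['AMD', 'Nvidia']
print(sorted(compatible_vendors("CUDA")))    # ['Nvidia']
```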
Additional GPUs provide more power in certain workflows, but with a typical setup – outside of intensive video work or 3D rendering – a single powerful GPU is often enough; you can add more, but you probably won’t see enough benefit to justify the cost. You will need to do a bit of homework to find the right balance of power and price for the tasks you do. Also take into account how many monitors you need and the resolution you work in; a lower-priced card might struggle to drive two 4K monitors and handle complex video or effects work at the same time.
Nvidia currently dominates the high end with its RTX 2080 and 2070 cards (and their powerful Ti variants), although AMD struck back this summer with the Radeon VII, its first 7nm GPU with a healthy 16GB of VRAM. The card stands toe-to-toe with the RTX 2080 in most areas, although it is a bit power-hungry.
For mid-range cards, you’re really spoiled for choice. Both vendors have a huge range of GPUs available, priced from £200 to around £500. Look out for ‘Super’ versions of existing Nvidia cards; the GeForce RTX 2070 Super offers RTX raytracing, with more CUDA cores and a higher clock frequency than its non-Super sibling for less than £500. Competing in the same space is AMD’s new RX 5700 XT – the first of its GPUs to use the ‘Navi’ architecture – which delivers RTX 2070-level performance for less than £400.
We’ve split 3D rendering out because it’s a very specific case. The last few years have seen a huge shift away from CPU rendering to the GPU. There are loads of dedicated GPU-based apps such as Octane, Redshift and Cycles, while traditionally CPU-bound renderers like Arnold, V-Ray and Keyshot have recently added GPU acceleration. They all currently use Nvidia’s CUDA libraries to perform real-time ray tracing, and, naturally, require an Nvidia GPU (and as many as you can slot into your PC case). The newer RTX cards show real performance gains – up to three times faster in Octane, for example.
However, while Nvidia has owned this market for about five years now, AMD isn’t out of the running. It has its own software, called Pro Render, which operates using OpenCL and works on both Nvidia and AMD GPUs. Also, the developers of Octane and Redshift have both committed to bringing their renderers to Apple’s Metal API, which is only supported by AMD cards. AMD is also working on its own implementation of hardware ray tracing, with its next-generation RDNA GPUs rumoured to be announced at CES in January 2020.
At this moment, we can’t really argue against going Nvidia to make use of CUDA-accelerated renderers, but the battle for GPU rendering is far from over.
For workstation-class GPUs, Nvidia has its Quadro range while AMD’s are branded as Radeon Pro. At first glance, the specs of the cards will look similar to – or occasionally weaker than – their consumer-level equivalents, but they’re designed for precision and robustness rather than the out-and-out speed of a gaming GPU.
Workstation cards are aimed at engineers, designers, 3D animators… anyone visualising computer graphics. The drivers are certified for use with specific apps, and they’re also built to handle multi-million polygon scenes, so often come with large amounts of error-correcting (ECC) VRAM, usually a minimum of 8GB and as much as 48GB.
One other reason to consider a Quadro or Radeon Pro card is if you want to work in 10-bit colour using a compatible monitor (typical GPUs only use 8-bit colour). Workstation-class cards produce more accurate colours and will let you work in high dynamic range (HDR). Having said that, a recent Studio driver update by Nvidia has opened up 10-bit colour to its GeForce and Titan GPUs, making the argument for a Quadro card even less convincing.
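The difference those bit depths make is easy to work out: 8 bits per channel gives 256 shades of each primary, while 10 bits gives 1,024, and the totals multiply across the three RGB channels. A quick sketch:

```python
def colours(bits_per_channel):
    """Total displayable colours for an RGB signal at a given bit depth."""
    shades_per_channel = 2 ** bits_per_channel
    return shades_per_channel ** 3  # three channels: red, green, blue

print(f"{colours(8):,}")   # 16,777,216 – roughly 16.7 million colours
print(f"{colours(10):,}")  # 1,073,741,824 – roughly 1.07 billion colours
```

That 64-fold jump is why 10-bit output matters for smooth gradients and HDR grading work.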
Workstation GPUs are more expensive – prices start around £700 and price tags in the thousands aren’t uncommon. But they should work faster in the apps they’re certified for, and operate more stably and reliably, especially where high precision is required. You can no doubt use a consumer-level card, but if your livelihood is reliant on producing accurate results, on time, every day, it might be a false economy.
The right GPU can offer real improvements to your workflow, but don’t get too hung up on the selection process. There are lots of cards that boast slender wins in specific benchmarks, but in truth, there aren’t that many applications where spending another £300 or £400 will get you any major advantage. Most cards within the same price range offer very similar performance (give or take a few percentage points), and it’s only with things like CUDA acceleration and raytracing that the benefits become clear.
Nvidia and AMD continue to launch new cards on a regular – and somewhat bewildering – basis, each with marginally improved performance. With so much power in the latest GPUs, as long as you’ve done your research, you can’t really go that far wrong.