Updating PC hardware in the 2020s

Optimization still requires knowledge

In December 2019 I updated my old PC to cutting-edge technology. In a previous article, I speculated that building a silent computer would nowadays be quite easy using generally available components. That turned out to be true. What came as a surprise, though, was how much I had to study motherboard manuals beforehand: to actually benefit from the highest speeds available, a lot of things need to be taken into account.


From the old build I brought the Super Flower Golden Silent 500 W SF-500P14FG PSU, the Lian-Li PC-A17 case, a SATA DVD burner, the Thermalright HR-01 CPU heatsink and a couple of Crucial MX500 1 TB SATA SSDs. As the PCI bus is now obsolete, I had to find a replacement for my old audio interface. I ended up buying the Motu M4. I wrote a separate review article about it.

New USB front panel

For the case I wanted to upgrade the front panel USB connectors to USB 3.0, but since my case is such an old one, parts are hard to find. Luckily, there happened to be one somewhat suitable panel connector available from an importer: the Lian-Li PW-IN20AV65T0. The original panel in my case has a FireWire connector, two USB ports and audio jacks. I cut the new connector panel into parts and glued it back together with a new layout. The end result is actually quite perfect.


Processor

For the CPU I went for the AMD Ryzen 7 3800X, an 8-core unit which is simply a fantastic all-rounder.

At least the Ryzen 7 CPUs seem to have quite funky reported temperatures compared to what I'm used to seeing. The temperatures might hit 80°C very rapidly, within a couple of seconds. The processor basically "overclocks" itself all the time: there's a nominal base clock, some overclock which you'll probably achieve, and a single-core boost clock which you might achieve if you're lucky. For the 3800X the base clock is 3.9 GHz and the maximum boost clock is 4.5 GHz. The cores on my CPU run at around 4.0-4.35 GHz depending on how many of them are fully stressed. How fast yours will run is determined by the so-called silicon lottery. The temperatures start to settle after running at the high clocks for a while, so they don't keep rising to critical levels. In any case, if you have an old fan control setup based on CPU die temperature and you upgrade your CPU, the fan curves will probably need some adjustment.

Also, the TDP (Thermal Design Power) can't be used as a reference anymore like back in the day. For example, in 2012 when I built my previous PC, I could roughly assess the CPU power usage from its specified TDP value. For the Ryzen 7 3700X the TDP is 65 W, and for the 3800X it is 105 W. The base clock difference is 300 MHz, but the boost clock difference is only 100 MHz. The TDP value should not be interpreted as the 3800X hogging 105 W while the 3700X only needs 65 W - given the clock speeds, that wouldn't make much sense. Real-life tests confirm this; searching for some results, the power usage difference seems to vary around 1-10 W depending on the situation. Considering the dynamic clock rates, the TDP is more like a guarantee: given a heatsink that can dissipate TDP watts of heat, the CPU will be stable at the nominal base clock.

I don't know the TDP dissipation rating of my old Thermalright HR-01 - I'm actually not sure if that was even a thing with heatsinks in 2009 when I bought it. As the AM4 socket is compatible with AM3 socket coolers when using the clips and not the holes, I could still use the HR-01, but it required an S-clip adapter. The funny thing is, I had one for my 2009 build, but had thrown it away for my last Intel build. I got very lucky and found a single S-clip for sale at a small computer store. I'm quite sure it was the last one in Finland, possibly in the whole of Europe. In any case, even an old heatsink is completely fine to reuse nowadays if you can just mount it. And if you can't... well, see the end of the article for a bonus hack.

Arctic MX-4 (2019 Edition) still seems to be a very good thermal paste, I've never had any trouble using it.

Graphics card

For the GPU I picked a PowerColor Radeon RX 5700 Red Dragon on sale. As my passive PSU is rated for only 500 W, I went for the non-XT version. It doesn't really matter, though, as the Red Dragon is a bit overclocked and fast enough anyway.

As I figured out, nowadays most graphics cards are semi-passive, so silent operation on the desktop is very easy to achieve. However, I think you still can't be 100% sure about this: apparently some models, even if they say they are semi-passive, are so only because of Windows drivers. As a Linux user, I needed to be sure that it's the hardware controlling the fan, not a Windows driver. This is something you might want to double-check before buying. Also, as usual, not all cards are built the same, so even a semi-passive model might be too loud under stress. I've been very happy with the PowerColor. It's silent when idle, quiet under stress, it has two BIOS options to further fit your power/noise limits, it has very good build quality, it's very compact, and I haven't had any problems with it. Two thumbs up for PowerColor!

Memory modules

For the memory I bought two 16 GB Crucial Ballistix Sport LT DDR4-3200 CL15 modules, 32 GB in total. Using two modules enables dual channel operation, doubling the effective bandwidth. I honestly can't even remember if I originally tried to run these at the rated CL15; at the moment I'm running everything on auto, which puts them at CL16. They seem to be stable at least with that setting. Traditionally, I've never had memory modules that would've run 100% stable at their rated clock rate without giving them more voltage, so this is a first, even if the latency is off by one cycle.
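The dual channel claim is easy to verify with back-of-the-envelope arithmetic. A minimal sketch, assuming the standard 64-bit DDR channel width:

```python
# DDR4-3200 performs 3200 million transfers per second,
# and each transfer moves 64 bits (8 bytes) per channel.
transfers_per_second = 3200e6
bytes_per_transfer = 8  # standard 64-bit DDR channel

single_channel = transfers_per_second * bytes_per_transfer  # bytes/s
dual_channel = 2 * single_channel  # two modules, two channels

print(f"single channel: {single_channel / 1e9:.1f} GB/s")  # 25.6 GB/s
print(f"dual channel:   {dual_channel / 1e9:.1f} GB/s")    # 51.2 GB/s
```

These are theoretical peak figures, of course; real workloads see less, but the doubling holds.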

Disk drives

For NVMe SSDs I bought a couple of 512 GB A-Data XPG SX8200 Pros and one A-Data XPG Gammix S11 Pro, which is basically the same model as the SX8200 Pro but comes with a heatsink. Since I needed three M.2 slots, I had to buy a PCIe x16 adapter card for one of them.

Now this is where things get interesting. NVMe uses PCIe, and each PCIe revision corresponds to a different per-lane speed: 500 MB/s for 2.0 and 985 MB/s for 3.0. PCIe also has the lane multiplier. Hence, a PCIe 2.0 x4 link is approximately as fast as a PCIe 3.0 x2 link. What link speed you may or should use depends on the CPU, motherboard and SSD capabilities. For example, the SX8200 Pro is rated for PCIe 3.0 x4. This puts its maximum speed limit at 3940 MB/s. Its specs also state a maximum read speed of 3500 MB/s and a maximum write speed of 2300 MB/s. So for intensive reading, PCIe 3.0 x4 makes sense, but for writing, PCIe 2.0 x4 results in almost the same speed.
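The arithmetic above can be condensed into a small helper; a sketch using the per-lane rates just mentioned (the function is my own illustration, not from any library):

```python
# Approximate usable per-lane bandwidth in MB/s for each PCIe revision.
PER_LANE_MBPS = {"2.0": 500, "3.0": 985}

def link_bandwidth(revision: str, lanes: int) -> int:
    """Total one-direction link bandwidth in MB/s."""
    return PER_LANE_MBPS[revision] * lanes

print(link_bandwidth("2.0", 4))  # 2000 MB/s
print(link_bandwidth("3.0", 2))  # 1970 MB/s, roughly the same as 2.0 x4
print(link_bandwidth("3.0", 4))  # 3940 MB/s, the SX8200 Pro's link limit
```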

It's no use paying more for super fast SSDs if your motherboard can't deliver the required speeds. This isn't usually a problem if you only have a single M.2 NVMe SSD, but with three it requires a lot of planning beforehand. In my case I knew I was going to run two of the SSDs in RAID (for Linux), so I figured one's bandwidth shouldn't limit the other's.

On the third SSD I installed Microsoft Windows. Even though it resides on its own disk drive, Windows still can't handle a Linux installation on the same computer. Every time there's been a major Windows version update, I've had to remove all my other disk drives. I've traditionally done it just for safety (because I don't trust Windows not to screw up my data), but this time it was actually a requirement: Windows Update just gave me the error 0x80070002, and when I manually tried to run the update assistant, it complained about disk space. Everything worked once I removed my Linux SSDs from the system. If you run a dual operating system setup, be prepared to do the same.


Motherboard

In my previous PC article I wrote the following:

Nowadays, this one is pretty easy. Back in the days, around 2005, you'd have to carefully pick a motherboard which did not have a tiny fan. But motherboards with fans are long gone, and it's usually enough to buy one with good quality capacitors.

Enter 2020 and the X570 chipset with PCIe 4.0 support. It reintroduced the tiny cooling fan, apparently as part of its specification. That was out of the question for me. I wasn't going to need PCIe 4.0 support either, so I opted for the B450 chipset, which was available for cheap.

As mentioned in the disk drives section, I read through manuals of at least five different motherboards to make sure I could run all my SSDs at optimum speeds. I ended up choosing the Gigabyte B450 Aorus Pro.

There's a PCI Express x16 slot, running at PCIe 2.0 x4 speed when the rest of the PCI Express slots (except for the main one, for the GPU) are empty. There are also two M.2 slots: one PCIe 3.0 x4/x2 and the other PCIe 3.0 x2. I'm running my Linux RAID setup on the PCIe 2.0 x4 and PCIe 3.0 x2 slots and Windows on the PCIe 3.0 x4 slot, so the bandwidths are pretty much optimized - nothing is wasted, but there aren't many bottlenecks, either.
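To sanity-check this layout, the slot bandwidths can be compared against the SX8200 Pro's rated read speed using the per-lane rates from the disk drives section (the slot labels are my own):

```python
# Link bandwidth per slot in MB/s (PCIe 2.0 = 500 MB/s/lane, 3.0 = 985 MB/s/lane).
slots = {
    "x16 slot via adapter (PCIe 2.0 x4)": 500 * 4,  # 2000 MB/s, Linux RAID half
    "second M.2 (PCIe 3.0 x2)":           985 * 2,  # 1970 MB/s, Linux RAID half
    "main M.2 (PCIe 3.0 x4)":             985 * 4,  # 3940 MB/s, Windows
}
rated_read = 3500  # SX8200 Pro maximum rated read speed, MB/s

for name, bw in slots.items():
    cap = min(bw, rated_read)  # effective sequential read ceiling
    print(f"{name}: {bw} MB/s link, ~{cap} MB/s effective reads")
```

The two RAID slots end up with nearly identical ceilings, which is exactly why neither one limits the other.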

If you have a lot of SATA disk drives, it's good to keep in mind that an M.2 slot usually eats up two SATA ports. This is due to the limited number of lanes provided by the chipset. So even though the Aorus Pro has 6 SATA ports, only two were available. I had two Crucial SSDs to put in those, so for the old DVD drive I bought a cheap USB-SATA adapter. It works absolutely fine, and since I put it inside the case, it was a very nice solution.

I've pretty much used up all the lanes and bandwidth the B450 has to offer, so it's definitely a lot of bang for the buck. Of the B450 itself I have no bad things to say.

Aorus Pro woes

Even if the B450 chipset has worked very nicely, I've had a couple of nasty problems with the Aorus Pro. The first one is related to buggy firmware, and the other one maybe to circuit design - it's hard to say.

UEFI boot magic and standards-compliance

After using the computer happily for a few months, I was suddenly greeted with the following text on boot:

Reboot and Select proper Boot device
or Insert Boot Media in selected Boot device and press a key

I was baffled since I hadn't made any changes. I also had a problem of my Microsoft keyboard losing power when the error appeared, so I had to push the hardware reset button, then go into the BIOS to figure out what the problem was. It turned out there was no apparent problem. Fiddling with the boot order sometimes helped, sometimes it didn't. Usually after scratching my head for a while, the problem suddenly went away. Until it occurred again. And again. Suddenly, after a few months of use, the motherboard losing the boot device had become a recurring problem.

At some point when the boot error occurred, I noticed that the computer could always boot into Windows if I just set the boot order so that the Windows SSD was the primary one. My disks are set up so that I have three M.2 SSDs: two which have a mirrored Linux on them, and one with Microsoft Windows. Each SSD has its own EFI System Partition (ESP). One Linux SSD has a copy of the other's ESP for backup purposes. I normally boot from the Linux ESP using rEFInd, and just select the correct boot loader/kernel from there. This is where the UEFI standard comes into play, and especially how badly motherboard manufacturers follow the standard.

If I put rEFInd at <ESP>/EFI/refind/refind_x64.efi, it should be found by a standards-compliant UEFI firmware. However, it's not surprising if the motherboard fails to find it.

The location <ESP>/EFI/BOOT/bootx64.efi is the default fallback location for UEFI boot. This is where I moved rEFInd and used it successfully for the few months before encountering the boot error.

I realized that by renaming rEFInd as the Windows boot manager, i.e. <ESP>/EFI/Microsoft/Boot/bootmgfw.efi, the computer would always boot. In the BIOS I selected the "Windows Boot Manager" on the correct SSD, even though it's actually rEFInd. I've never encountered the boot error since.

So, if your motherboard has a problematic UEFI firmware and has trouble booting EFI boot managers, try renaming your boot manager as the Windows boot manager and see what happens. The only downside is that sometimes when updating, Windows overwrites the custom boot manager. This is where backups or simply redundant copies of each boot manager help tremendously, so luckily it's easy to fix.
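The workaround amounts to copying the boot manager over the paths the firmware actually tries. A minimal sketch (the `install_refind` helper and the `/boot/efi` mount point are my own assumptions; on a real system this must run as root with the ESP mounted):

```python
import shutil
from pathlib import Path

def install_refind(esp: Path) -> None:
    """Copy the rEFInd binary over the UEFI fallback and Windows boot manager
    paths, so even a non-compliant firmware ends up finding it."""
    refind = esp / "EFI/refind/refind_x64.efi"  # standard rEFInd location
    targets = (
        esp / "EFI/BOOT/bootx64.efi",             # default fallback path
        esp / "EFI/Microsoft/Boot/bootmgfw.efi",  # "Windows Boot Manager"
    )
    for target in targets:
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(refind, target)

# Usage: install_refind(Path("/boot/efi"))
```

Rerunning it after a Windows update also restores the setup when Windows has overwritten bootmgfw.efi.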

The machine spirits aren't always favorable

Now with the UEFI boot problem solved, I ran my PC with zero problems for over half a year, when suddenly one day, after 14 months of use, it wouldn't start. In fact, there seemed to be no sign of power on the motherboard. My first thought was obviously that the PSU (Power Supply Unit) must have died - after all, it had been 4 years since I got a replacement from warranty, and the original one had broken down similarly after 4 years. I almost ordered a new one but decided to double-check and borrowed a PSU from a friend of mine.

To my surprise, the motherboard was still completely dead. Eventually I stripped off every component except the CPU, including the BIOS battery. I laid the motherboard on a cardboard box to make sure there were no short circuits or anything like that. Nothing. I measured a normal voltage from the BIOS battery, but put in a replacement anyway. And suddenly there was power again. The battery probably had nothing to do with it, as the motherboard runs just fine without one. My diagnosis is that there is a hardware bug in the power circuit such that it sometimes refuses to pass on any power, and that the bug got resolved when the charge in the capacitors drained, or something like that.

I searched the Internet for the problem and found three similar cases: one person had RMA'd their motherboard and got a new one, another had a broken power button in their computer case, and the third one just reassembled their computer and everything worked, so they figured it must've been a short circuit. A broken power button is of course a plausible cause, but I'm pretty sure there were no short circuits in the other cases. It was probably just the same random problem that I encountered.

Diagnostic LEDs
Suddenly the motherboard got power again. As I had stripped out even the memory modules, the DRAM error LED was lit.

So I just put everything back as it was, and everything has been working normally again for a few weeks now. Motherboard bugs aren't actually that rare in my experience - out of the last four motherboards I've bought, not a single one has been completely bug-free. But I think this is the first time I've encountered this kind of bug in the power circuit. The lesson of the story: if you get no power, it might just be a rare occurrence, and nothing is necessarily broken. "Have you tried taking it off the mains power and turning it back on again?"

At least I was reminded how to test whether an ATX PSU is working or not. There's a green wire in the 20/24-pin ATX power connector. Just connect it to a ground pin (any black wire) and the power supply should turn on. If the wires are all black, the green wire's pin is the one on the right, fourth from the top, when you look directly at the pins and hold the connector so that its clip is on the right side. Ground pins can be found immediately above and below it.
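For reference, the same pins by their numbers in the standard ATX 24-pin numbering, where pins 13-24 form the row on the latch side (the helper function is just an illustration):

```python
# The pins relevant to the paperclip test, by standard ATX 24-pin numbering.
ATX_PINS = {
    15: "COM (ground, black)",
    16: "PS_ON# (the green wire)",
    17: "COM (ground, black)",
}

def paperclip_test_pins():
    """Return a (PS_ON#, ground) pin pair to bridge so the PSU turns on."""
    ps_on = next(p for p, name in ATX_PINS.items() if name.startswith("PS_ON"))
    ground = next(p for p, name in ATX_PINS.items() if name.startswith("COM"))
    return ps_on, ground

print(paperclip_test_pins())  # (16, 15)
```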


Display

Originally, I didn't update the 42" FullHD Sony television that I used as my display. However, it broke down in summer 2020, and I decided to buy a 40" Panasonic TX-GX800 4K television as a replacement. Too bad that model, like apparently so many nowadays, comes with a stand that not only raises the television a bit but also tilts it slightly upwards. Since the surface of the GX800 series is super reflective and the viewing angles are horrendous (this is actually why I wouldn't recommend the GX800 series to anyone), this was a big problem. I decided to reuse the stand from my old Sony. After a bit of modding I managed to create a visually appealing end result that not only is much more ergonomic but also saves space.

There didn't seem to be many options around the 40" range last summer; it was basically either the 40" TX-GX800, a 48" LG CX or a 43" Acer Predator. I was a bit scared to buy an OLED for desktop use, and the Acer was a bit too expensive given the risk that it would've been simply too bright - I read in a review that even at minimum brightness it is very bright. In hindsight, I think I should've bought either of the other two, but at least the Panasonic wasn't that expensive, and it's not that bad after calibration, either.

I tried the HDR10 feature with Forza Horizon 4, but that's pretty much the only thing I've managed to use HDR with. I'm not a fan of ultra-bright displays, so I don't see the HDR stuff as necessary for a modern setup even where it is supported. It's maybe nice if it works well, but in practice you probably don't need it, as it still seems to be such a rare feature. A 100 Hz or even faster panel is much more important, but I'm fine even with the traditional 60 Hz one.

Bonus: hacking a Phenom II heatsink on Intel LGA1155 socket

After putting together my new computer, I sold the old parts as a bundle. However, as I reused my CPU heatsink, I had to figure out an alternative for the old Intel setup. A friend of mine had an original, unused AMD Phenom II cooler lying around. I used the Thermalright LGA1155 mounting bracket screws and cable ties to fit the AMD cooler. It barely fit - there was maybe 1 mm of leeway - but it worked absolutely perfectly. These are the kind of DIY computer hacks I feel excited about. 😛

Creative Commons License  This article by Olli Helin is licensed under the Creative Commons Attribution 4.0 International License