dragontamer 6d
1. While I agree we're beginning to reach absurd proportions, let's really analyze the situation and think it through.

2. Are there any GPUs that have actually caused physical damage to a motherboard slot?

3. GPUs are already 2-wide by default, and some are 3-wide. 4-wide GPUs will have more support from the chassis. This seems like the simpler solution, especially since most people rarely have a 2nd add-in card at all in their computers these days.

4. Perhaps the real issue is that PCIe extenders need to become a thing again, so GPUs can be placed at an anchored point elsewhere on the chassis. However, extending up to 4-wide GPUs seems more likely (because PCIe needs to get faster and faster. GPU-to-CPU communication is growing more and more important, so PCIe 5 and PCIe 6 lanes are going to be harder and harder to extend out).

For now, it's probably just an absurd look, but I'm not 100% convinced we have a real problem yet. For years, GPUs have drawn more power than the CPU/motherboard combined, because GPUs perform most of the work in video games (i.e. matrix multiplication to move the list of vertices to the right locations, and pixel shaders to calculate the angle of light/shadows).

db48x 6d
At this point both the motherboard and graphics card need to be mounted on the back plate of the chassis, so that they can both use tower coolers. You can already use a PCIe extender to achieve this, but it should become the standard.
SanjayMehta 6d
New design: Switch things around and stick the CPU into a slot on the GPU.
rascul 6d
Why can't I find a recent GPU that doesn't take up half my case? Do the GPU manufacturers not care about people who want more than onboard graphics but don't need the most powerful gaming GPU?
cabirum 6d
My motherboard says it has "reinforced" pcie slot :)

But yea, I have to say peak power consumption has to be regulated so companies compete in efficiency, not raw power.

superchroma 6d
It's not a motherboard problem. How would you integrate support for user-provisioned cooling options (to match the cooler to the card's wattage) and still keep any sort of flexible expansion slot area? GPUs can't be turned into a single chip, there's too much going on, so you're never going to have a CPU-cooler situation. So, fine, what if you made them daughterboards mounted like M.2 SSDs; that may work, except ATX now has to become ITX to give room for an extra board's worth of space.

It's a PC case orthodoxy issue, really. People want plugs at the back of the box, which dictates how the GPU must sit in the case, and disagreement on GPU sizing means no brackets. Solve these two issues and life gets a lot better.

Or, solve it like the SFF case guys solved this problem: use a PCIe extender cable to allow the GPU to be mounted wherever you like.

bitxbitxbitcoin 6d
Turn the tower on its side so the motherboard is parallel with the ground and the weight of the GPU keeps it in the PCI-e slot. It is my understanding that GPUs are able to still properly dissipate their heat in this configuration.

Great article!

termie 6d
Compare the size of the new air-cooled 40XX cards and the iChill 4090, which is tiny by comparison. The simple answer is just to use liquid cooling if you have a card drawing 400 W. Then all the absurdity just goes away.
teddyh 6d
Stop making cards wider; Bring back full-length cards! With cases having guided slots for them!
ramesh31 6d
Why can't they be integrated? It seems like the component manufacturers should easily be able to make motherboards with a CPU and a GPU socket. Even if the GPU is soldered, it would still be pretty painless to upgrade your motherboard/GPU whenever a new chip came out.
zaptheimpaler 6d
How about we move to external only GPUs with huge connectors? If GPUs are half the size, power consumption and price of a PC now, they might as well be a separate device. As a bonus the rest of the motherboard & PCs actually get much smaller. A PC without any spinning disks could conceivably just be the size of a NUC by default, something you can travel with when you don't need the beefy GPU.
winkeltripel 6d
Could we just follow the ATX spec? There is a maximum length for expansion cards, and at that end there are optional supports. These are in servers already. Just start using all that case volume to support the GPU.
msbarnett 6d
> A 4080 series card will demand a whopping 450 W,

No, that was just a rumour that was floating around. The 4080 16GB model is 340W TGP, the 12 GB is 285W TGP out of the box. The 3080 (10 GB) was 320W TGP, as a comparison point.

frostburg 6d
This specific problem can be solved by rotating the motherboard 90 degrees (there are a few Silverstone cases laid out like this; they also tend to have excellent air cooling performance).
anigbrowl 6d
What if we just have blocks of quartz and use laser arrays to build photonic switching junctions, no more cooling problems because it's just photons ¯\(°_o)/¯

Seriously though, I imagine it's only a matter of time before these engineering decisions are themselves handed off to machines.

intrasight 6d
It's the IBM PC legacy. They won, and we've lived with that form factor for 40 years now. A new PC looks very much like one from 1982. Back in '82 when I started in robotics tech, we mostly used VME, a super-robust interconnect platform. There is no "motherboard" with VME and similar bus architectures; there is a backplane. Why can't we have the benefits of a physical platform like VME but with the electrical form factor of PCIe?
chx 6d
There used to be so-called PIO motherboards from China. These were slightly larger than ITX, and the PCIe connector was rotated 90 degrees so the video card was planar with the motherboard.

And if we are to reform our computer chassis anyway, we could move the PSU to straddle the motherboard and the video card, and even have the VRM inside. High-amperage "comb" connectors exist, and VRM daughtercard motherboards have existed. Change the form factor so two 120mm fans fit, one in front, one in the back.

So you would have three 120mm front-to-back tunnels: one for the video card, one for the PSU, one for the CPU.

lstodd 6d
All you have to do is put an aluminum I-bar on top of the card if the cooler itself doesn't provide adequate rigidity, which I doubt.

I'd guess that if excessive stress on the PCIe slot were a problem, it'd be solved by combining a good 2-3 slot mount on the back side with enough aluminum+plastic to hold the rest.

Lramseyer 6d
> Should we have GPU VRAM slots alongside CPU RAM slots? Is that even possible?

I chuckled a little at this because I used to wonder the same thing until I had to actually bring up a GDDR6 interface. Basically the reason GDDR6 is able to run so much faster is because we assume that everything is soldered down, and not socketed/slotted.

Back when I worked for a GPU company, I occasionally had conversations with co-workers about how ridiculous it was that we put a giant heavy heatsink on the CPU and a low-profile cooler on the GPU, which in this day and age produces way more heat! I'm of the opinion that we should make mini-ATX-shaped graphics cards that you bolt behind your motherboard (though you would need a different case with standoffs in both directions).

userbinator 6d
Every time I see the sizes of GPUs increase, I'm reminded of this from over 2 decades ago:

hakfoo 6d
This is (much less of a) problem on a flat layout, like what used to be called a "desktop" case, instead of the conventional tower. Then the weight of the card is just pushing straight down in the direction the card already wants to be oriented.

I'm using a pretty heavy modern GPU (ASRock OC Formula 6900XT) in a Cooler Master HAF XB with that layout, and sagging and torquing is not much of a concern. The worst part is just fitting it in, since there's like 2mm between the front plate and the board-- you have to remove the fans so you can angle the card enough to fit.

I also suspect that if we went to the 80's style "a full length card is XXX millimetres long, and we'll provide little rails at the far end of the case to capture the far end of a card that length" design, it would help too, but that would be hard to ensure with today's exotic heatsink designs and power plug clearances.

MrFoof 6d
I made this GIF to illustrate the point of how large these new high-end NVIDIA Lovelace consumer GPUs are:

This is the ASUS RTX 4090 ROG STRIX. Air cooled, no waterblock. That is a mini-ITX form factor motherboard, hence why it looks so comically large by comparison.

This is one of the physically smallest 4090s launching. Its confirmed weight is 2325g, or 5 ⅛ lbs. Just the card, not the card in its packaging.

flenserboy 6d
Perhaps game makers could focus on gameplay and story instead of whatever unnecessary detail is chewing through so much data. The big iron is great for actual work, but is pure overkill to have in some junior high kid's room. Just an idea.
Taniwha 6d
This is not a new problem, back in the late 80s I worked for a Mac graphics card developer ... We made some of the first 24bit accelerated graphics cards.

Our first was just an existing card with a small daughter card with PALs and SRAM on it. It was so easy that we got our own logos put on many of the chips to throw the competition off the scent; we designed that one in days and got it to market in weeks.

We immediately started on 2 more designs. The next was all FPGA. It was as big a NuBus card as one could build, it pulled too much power, and it tilted under its own weight out of the bus socket (Macs didn't use screws to hold cards in place; that happened when you closed the chassis). We got it out the door about the point that the competition beat the first board's performance.

The final card was built with custom silicon, designed backwards from "how fast can we possibly make the VRAMs go if we use all the tricks?" In this case we essentially bet the company on whether a new ~200-pin plastic packaging technology was viable. This design really soaked the competition.

In those days big monitors didn't work on just any card, so if you owned the high-end graphics card biz you owned the high-end monitor biz too ... The 3-card play above was worth more than $120m.

ascar 6d
I'm surprised this isn't mentioned here more clearly: some high-end cases like the be quiet! Silent Base I'm using have the option to mount the graphics card vertically, basically parallel to the mainboard, in a separate anchor slot. It needs an additional connector cable (~$20), but other than that it's easy to set up, looks better with a windowed case (the illuminated fans face the glass side), and the weight pulls on a special anchor point just for that, with no electronics involved. Plus the card itself is sturdier in that orientation and there are no issues with it bending under its own weight. It might even be beneficial ventilation-wise, as the graphics card no longer creates a horizontal divide (basically creating separate ventilation zones above and below the card).

Yes, the cable will add approximately 0.3 ns of additional latency due to the added 10cm of distance.
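
A quick sanity check on that figure (a sketch, assuming a velocity factor of ~0.7c, typical for copper cabling; the exact factor for any given riser cable is an assumption here):

```python
# Back-of-the-envelope: extra one-way latency from 10 cm of riser cable.
# VELOCITY_FACTOR is an assumed typical value for copper cable, not a
# measured spec for any particular riser.

C = 299_792_458          # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.7    # assumed fraction of c for signal propagation
length_m = 0.10          # 10 cm of added trace/cable

delay_ns = length_m / (VELOCITY_FACTOR * C) * 1e9
print(f"extra one-way latency: {delay_ns:.2f} ns")  # ~0.48 ns
```

At a full c the same 10 cm works out to ~0.33 ns, which matches the 0.3 ns figure above; with a realistic velocity factor it's closer to half a nanosecond. Either way it's negligible next to PCIe protocol latencies.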

This is what it looks like:

stormbrew 6d
The thing I don't get is why we are so damn stuck on towers as the default form factor. It's pretty much masochism to mount a sensitive piece of 8-billion-layer circuit board vertically and then hang a bunch of blocks of solid heat-conducting metal from its side, held on only by a soldered-on piece of plastic.

Bring back proper desktop cases!

chubs 6d
What if a 4-slot-wide GPU had 3 'dummy' slots that plug into the other unused PCIe slots, with no electrical connection, acting only as support?
mastax 6d
All In One liquid coolers are the answer. They were already getting popular in the RTX 3000 series. They make the card thinner and separate out a lot of the weight. They can't cost much more than some of those massive conventional coolers.
wtcactus 6d
Although the author focuses solely on how to fit/support the card in the motherboard and provide an adequate cooling solution, I actually find it a bit too much, this race to higher performance based on increasingly higher power requirements.

New NVIDIA cards will draw 450 W, and even if you lower that in settings, the whole package will still need to be manufactured to support those 450 W at various levels.

I seriously wonder what games are doing that requires that extra power. I, personally, would much prefer having to slightly lower settings (or expecting devs to take at least some basic steps to optimize their games) to having a 450 W behemoth living inside my computer.

Meaning, 40xx series will be an obvious pass for me. My 1080 Ti is actually still great in almost all aspects.

virgulino 6d
I liked the new 4090. 12 fans seems reasonable for a sub 1 kW card. Those 2 AC power connectors on the back are a nice innovation. Great benchmark numbers. That they managed to have 21 cards in stock at launch is fantastic!

The 4090 Ti looks fantastic too. Totally worth the risk of fire.

bmitc 5d
What is the end game of consumer GPUs, primarily for gaming? It seems wasteful (?), not sure of the right word here at the moment, to put all this effort into GPUs for general-purpose computers, with all the downstream problems (cooling, space, mounting, etc.), to get arguable improvements in gaming experiences. There seems to be an outright arms race among consumers and manufacturers alike for all this, and I personally am not sure why this stuff is so in demand. Are there other consumer markets where high performance is so accepted and commonplace?
patates 5d
The GPU should be the motherboard and we should install other lowly components (like the central processor) on top of them.

But seriously, 450 watts in this day and age of increasing energy prices? Crazy.

qwerty456127 5d
Back in the days when I was a kid, tower PCs were comparatively rare and most PCs used the horizontal desktop design, which is essentially the same as a tower put on its side. People would often put the monitor on top of it to save desk space (see the Windows 95-2000 "My Computer" icon). Isn't it time for that to return, so we wouldn't need "GPU support sticks"?

By the way, what actually dissatisfies me is the majority of mainboards having too few PCIe slots. Whenever I buy a PC I want a great, extensible, future-proof mainboard + very basic everything else, incl. a cheap graphics card, so I can upgrade different parts the moment I feel like it. Unfortunately such many-slot mainboards seem to all target the luxury gamer/miner segment and be many times more expensive than ordinary ones. I don't understand why some extra slots have to raise the cost 10x.

Thorentis 5d
Why do we need to keep plugging the GPU directly into the board? Why can't GPU makers ship cables that go into the PCIe slots, and then connect to the GPU, then we can mount the GPU somewhere else in the case (perhaps case makers can start adding GPU slots where the DVD drives used to go or something).
qwerty456127 5d
> Maybe we can make motherboards with a GPU slot next to the CPU slot and have a unified massive radiator sitting on top of them

Sounds reasonable, we already used to have separate CPU and FPU sockets in the distant past.

However, isn't it nice that every extension card, incl. GPU cards, uses the same unified connector standard and can be replaced with anything very different? Wouldn't switching to an MXM form factor, introducing an extra kind of slot, be a step back? Haven't we already ditched a dedicated GPU card slot (AGP) in favour of unification?

fancyfredbot 5d
NVIDIA makes a socketed version of their data center GPUs. The socket is called SXM. It would be cool if consumer board partners and motherboard manufacturers used it too.
alkonaut 5d
Why not just make an ATX GPU with a CPU slot on it? With the size and cost of these things, it's the rest of the machine that feels like the peripheral, not the VGA.

The GPU is the main part of the machine by cost, weight, complexity, power consumption. And it's not even close.

Havoc 5d
The Gamers Nexus video is worth a watch for entertainment. The marketing has been cranked up to 11. Graphics cards promising their buyers "absolute dark power". Honestly...