Fine Wine

9070xt saved my life

What do the two numbers in each bar mean?

The lower number is the 1% lows, the higher one is the overall average

More to the point, I have a 4090. I thought this was supposed to be future-proof? Why the fuck is the 2nd fastest card on the market not able to hold 60 FPS? This is insane.

The 1% lows on the 4090 are awful and I'm surprised how good they are on the 9070 XT. I'm still in awe at how shitMD managed to feed the 9070 XT with slow-ass discount GDDR6 memory. Nvidia has like twice the bandwidth for the same class of card and yet AMD isn't bandwidth-starved.
Now they just need FSR4 in every game and a real flagship with UDNA and we'll finally have a competent nvidia alternative. Not a Radeon that only serves to make nvidia cheaper, but a Radeon that you might actually want to buy over nvidia.

4070ti super

$800 USD

47fps

RTX 40/50 are the equivalent of the 9900K/10900K: outdated cores brute-forcing performance with core count and last-level cache, while RDNA4 is a new uarch like Zen 3 was

he fell for the future proof meme

LOL

Average FPS (self-explanatory) and 1% lows (essentially worst-case FPS).
The 9070 XT has 8192 shaders (half of which are only used in rare dual-issue scenarios, so it's really more like 4096). The 5070 Ti has 8960 shaders. Nvidia has roughly 2x the effective shaders but not 2x the bandwidth. AMD also has way more cache
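To put numbers on that (shader counts are from this post; the "effective 4096" figure is the poster's dual-issue assumption, and the 5070 Ti's 896 GB/s is a commonly cited spec, not from the thread), a quick Python sanity check:

# Back-of-envelope check of the shaders-vs-bandwidth claim above.
shaders_9070xt = 8192
effective_9070xt = shaders_9070xt // 2  # poster's assumption: dual issue rarely used
shaders_5070ti = 8960
bw_9070xt = 640   # GB/s, from the thread
bw_5070ti = 896   # GB/s, commonly cited spec (assumption)
print(f"shader ratio:    {shaders_5070ti / effective_9070xt:.2f}x")  # ~2.19x
print(f"bandwidth ratio: {bw_5070ti / bw_9070xt:.2f}x")              # ~1.40x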

You bought a 4070 for Quadro prices. I hate to break the news to you but you are retarded. Don't worry too much all of Nvidia's fans are retarded too so you are not alone.

It IS relatively future-proof. Look at where the new $1000(+++) 5080 is sitting.

The game has insane visuals and all of these results are with RT on and maxed out. The game at 4K performs like Cyberpunk or Alan Wake 2 at 1080p and has zero traversal or shader stutter.
You can reduce settings if you want massive FPS and visuals closer to Eternal's level.

This is the level of intelligence of the average Nvidiot

reading comprehension is a rare skill these days

My dude, the RX 9070 XT has almost the same die size as the RTX 5080 and it's a purely gaming card that cannot compete with the 5080 in other tasks like rendering, training, or even inference. It costs AMD more to make than it costs nvidia to make the 5070 Ti.
The 9070 XT is a great card but you're delusional if you think nvidia is the one brute-forcing performance.

yeah i have a 1660 super

i've got a 4070 super, is it ok?

Just turn on DLSS and it will run better than AMD and look the same.

Why don't you get a console if you want to use upscaling?

The only console with upscaling is a PS5 and it's worse than DLSS in every way, what kind of retarded question is this?

You have no idea. People have no concept of value whatsoever.

3090

I should've listened and bought a 4090 when I had the chance... they were going for as low as 1300€ last year

What the fuck are you talking about, every console has upscaling and pretty much every "4K" console game is upscaled.

every console has upscaling

They literally don't, but whatever you say.

4k

meme, don't care

Nvidia FPUs are dual-issue but they can't simultaneously do 1 INT op and 2 FP ops. If you want 2x FP on nvidia you can't issue INT ops, and games use a blend of both, unlike something like Blender where you actually see the uplift and 2x FP.
RDNA4 made the greatest strides in out-of-order execution and catching up with Ampere. RDNA3 also had dual-issue FP but it was broken and virtually never used. L3 doesn't explain why the 9070 XT is so efficient with bandwidth either, because the 6900 XT had even more of it and yet scaled a lot with more bandwidth in a way the 9070 XT doesn't.
Nvidia does indeed have almost twice the bandwidth. The 5080 has 1 TB/s effective bandwidth on its GDDR7 while the 9070 XT is only 640 GB/s, which is very low for a card this fast. AMD did some real magic here; even nvidia must be scratching its head a bit.
RDNA4 is as impressive as Lovelace was in 2022. It's nice that both nvidia and AMD are constantly pushing the envelope.
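A toy model of the co-issue constraint described above (a sketch, not real hardware behavior: assume each issue cycle retires either 2 FP ops or 1 FP + 1 INT, so every INT op displaces one FP slot; the TFLOPS figure is made up):

def fp_utilization(int_fraction):
    # Fraction of peak FP32 throughput reached for a given INT mix.
    # Valid for int_fraction <= 0.5; past that, the 1-INT-per-cycle
    # limit becomes the bottleneck instead.
    assert 0.0 <= int_fraction <= 0.5
    return 1.0 - int_fraction

peak_tflops = 44.0  # hypothetical peak FP32 number, illustration only
for mix in (0.0, 0.2, 0.3, 0.5):
    usable = peak_tflops * fp_utilization(mix)
    print(f"INT mix {mix:.0%}: ~{usable:.1f} usable FP32 TFLOPS")

With a game-like blend of INT ops, the "2x FP" headline number shrinks accordingly, which is the Blender-vs-games gap the post describes.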

If you don't care about AI and you don't mind FSR, and you got the 7900 XTX for less than $800, you are the real winner

Where are RDNA1, Vega, and Polaris?

L3 doesn't explain why the 9070 XT is so efficient with bandwidth either, because the 6900 XT had even more of it and yet scaled a lot with more bandwidth in a way the 9070 XT doesn't.

That's Gen1 Infinity Cache. Gen2 was introduced in RDNA3 and it's like 3x faster

effective bandwidth

Just use regular memory bandwidth. Cache does help but it's too volatile to be reliably measured

960 GB/s : 640 GB/s = 1.5
10752 : 4096 = 2.625
Nvidia has far more shaders than the bandwidth can feed, so yes, their cards are bandwidth-starved vs AMD's
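Or as bandwidth per shader (all numbers from this thread; a back-of-envelope sketch, not an occupancy model):

# GB/s per shader, using the thread's figures.
cards = {
    "RTX 5080":   (960, 10752),  # raw GDDR7 bandwidth, shader count
    "RX 9070 XT": (640, 4096),   # raw GDDR6 bandwidth, "effective" shaders per the post
}
for name, (bw_gbs, shaders) in cards.items():
    print(f"{name}: {bw_gbs * 1000 / shaders:.0f} MB/s per shader")
# RTX 5080:   ~89 MB/s per shader
# RX 9070 XT: ~156 MB/s per shader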

Doesn't the 9070 XT have huge L0/L1 caches? At least off the top of my head

Can't run it, game requires ray tracing

There are many methods for upscaling that aren't FSR or DLSS or other variants of TAAU (which is what those two are).
All consoles use upscaling because they can't even dream of rendering native 4K. The Xbox Series X and base PS5 have shitty naïve upscalers: spatial upscalers like FSR1 and terrible TAAU like FSR2 and UE5's TSR. The PS5 Pro has PSSR, which is a mixed bag but better than FSR2.
Both PC and consoles need upscaling for modern games, but PC has access to quality upscaling in FSR4 and DLSS4 while consoles are stuck with ancient naïve upscaling, which is what you probably meant to say.
When you play back a 720p video on your 4K display and fullscreen it, you're also upscaling it, usually with simple methods like bicubic.
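For reference, that kind of naïve spatial upscale is a one-liner; a minimal sketch assuming Pillow >= 9.1 and a hypothetical frame_720p.png:

# Naive spatial upscaling: 720p -> 4K with bicubic filtering.
# No temporal data, no ML model, just resampling.
from PIL import Image
frame = Image.open("frame_720p.png")  # hypothetical 1280x720 input
upscaled = frame.resize((3840, 2160), resample=Image.Resampling.BICUBIC)
upscaled.save("frame_4k.png")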

Doesn't matter if you don't mind FSR; devs don't even bother to add it most of the time because it's worse than XeSS, like in Expedition 33. Devs putting in features for Intel GPUs over AMD GPUs tells you all you need to know.

Skill issue. Mesa drivers have raytracing for older cards.

Man, I'm gonna have this 4090 till I die.
Or at least for the next 4 generations.

No shit. So your question is seriously "why don't you use TSR upscaling over DLSS?" That is even more fucking retarded. It's such a retarded comment I didn't even consider anyone could possibly be asking a question that idiotic.

loonix

kek

first architecture named after a black scientist

it's the absolute worst garbage that nvidia has ever shat out

What did they mean by this?

I Am Altering the Deal, Pray I Don't Alter It Any Further.

Wide fatfuck GPU designed for AI first and gaming second is bad at 1080p

Shocker

Nvidia added all that L2 so they wouldn't suffer as much from cache misses. If you compare the 3090 and 4090 with Nsight you'll see how much Ampere stalls.
AMD clearly has some secret sauce in RDNA4 that isn't just L3; L3 in RDNA3 had a very poor showing.
RDNA4 in general is kind of a black sheep. The density of the design is particularly striking: Apple's M4 on 3nm is only 11% denser than Navi 48. AMD must be using HD cells and somehow not killing performance or creating hotspots in the design. It's the most impressive GPU design AMD has shown in a long time.
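The density claim roughly checks out if you plug in commonly cited transistor counts and die sizes (the figures below are approximations I'm assuming, not official numbers):

# Rough density math for the M4 vs Navi 48 comparison.
navi48_mtr, navi48_mm2 = 53_900, 357  # ~53.9B transistors, ~357 mm^2 (approx.)
m4_mtr, m4_mm2 = 28_000, 166          # ~28B transistors, ~166 mm^2 (approx.)
d_navi = navi48_mtr / navi48_mm2      # ~151 MTr/mm^2 on N4
d_m4 = m4_mtr / m4_mm2                # ~169 MTr/mm^2 on N3E
print(f"M4 is ~{(d_m4 / d_navi - 1) * 100:.0f}% denser")  # ~11-12%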

The clockspeed ceiling on RDNA4 is breddy gud too. It's like they fixed everything that went wrong with the 7800 XT and made sure it was cheap to produce too

I'm sorry you are too retarded to articulate what you mean and instead expect people to decipher your neanderthal vomit every time.
Either way, consoles do upscale games, so you were objectively wrong, and I accept your concession.

4080 Super beats 5080

That's retarded. Sometimes I wish I'd gotten a 7900 XTX instead of the 4080 Super; I don't even use the AI shit, it's a novelty.

Boring. Post some interesting benchmarks: CPUs, pre-RT AMD cards, Adreno, Apple, other ARM shit

stop bitching and sell your 4080S for a 9070xt then

dude it's secret sauce trust me it's just AMD magic

Most of the effort in RDNA3 went into making chiplet GPUs viable. We'll see if it pays off in the future, but since AMD couldn't scale it up super hard like they did with Zen 2, it's no surprise the performance wasn't amazing

The partner cards are really good. The Red Devil is only a small premium over MSRP and is a top-tier design; you can push it to trade blows with a 5080. All the equivalently nice nvidia cards, like the Astral and Vanguard, are prohibitively expensive, and the rest are cheapo models with anemic power limits.

All I'm saying is that I shouldn't have to shell out $3-4000 for a meaningful upgrade.

Gigabyte Gaming OC GPUs almost always have the highest power limits of any model at mid-tier pricing. Only issue is the cooler is second-rate

lmao no
RDNA3 was a margins design. The chiplets were there only to make the cards as cheap as possible, and the design was so dogshit AMD scrapped it immediately. It was so buggy they needed to unfuck it with RDNA3.5 for mobile because it scaled worse than RDNA2 lol.
You're underestimating how good RDNA4 is. shitMD really cooked this gen, to everyone's surprise. I'd argue Blackwell is overall the superior uarch, but the jump from RDNA3 to RDNA4 was a miracle. They also stopped treating DL like a meme and went from FSR2-3 being unusably shit, worse than early DLSS2, to FSR4 leapfrogging DLSS3.

People get 9070 XT, start tinkering

Try overclocking, performance gets worse even though clock speeds are higher and stable

turns out GDDR6 error correction was kicking in so memory errors were suppressed and manifested as lower performance

turns out the biggest factor for overclocking RDNA4 is not +clockspeed, but how much of an undervolt you can achieve

Why is AMD like this?
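A toy model of why that happens (entirely made-up curve, just to illustrate error-detect-and-replay eating into an overclock; the 20 Gbps / 256-bit baseline matches the 9070 XT's 640 GB/s):

# Effective memory throughput when replay overhead grows with clock.
def effective_bandwidth(clock_gbps, bus_bits=256, stable_gbps=20.0):
    raw = clock_gbps * bus_bits / 8          # GB/s before any retries
    over = max(0.0, clock_gbps - stable_gbps)
    replay_overhead = 0.02 * over ** 2       # hypothetical error/replay curve
    return raw / (1 + replay_overhead)

for c in (20.0, 21.0, 22.0, 23.0):
    print(f"{c:.0f} Gbps -> ~{effective_bandwidth(c):.0f} GB/s effective")
# Past some point the "overclock" delivers less than stock, with no crash to warn you.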

I'm still on a 2070 Super. I'm waiting for a good time to upgrade, but it doesn't seem to be coming.

Friends don't let friends buy Gigashite and AssRock. I had nothing but problems with their products. I've always had good experiences with AMD AiB partners like PowerColor and Sapphire; for nvidia I usually go with Asus or PNY.

ASRock is fine outside of their exploding motherboards

same

t. 2060gger

All cards are like that nowadays, and have been for years. Nvidia also has error correction kicking in when you push the memory too hard. RTX 50 has the extra caveat that it has so much bandwidth, overclocking the memory can reduce performance because less juice goes to the GPU.
The 6900 XT was the last card where you could get more performance by adding voltage and weren't a slave to the power limit, and only because you could overwrite the power limit to 450W or higher with soft PowerPlay tables.
This is from der8auer's video, right? I have no idea why he was confused, because his 4090 video had the same conclusions. The card only responded to UV.
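A sketch of why undervolting is the lever under a power limit (simplified dynamic-power model, P proportional to f*V^2; the cap and calibration point are hypothetical):

# At a fixed board power cap, lower voltage frees headroom for clocks.
POWER_CAP_W = 330.0                 # hypothetical power limit
K = POWER_CAP_W / (3.0 * 1.00**2)   # calibrate: 3.00 GHz at 1.00 V hits the cap

def max_clock_ghz(volts):
    return POWER_CAP_W / (K * volts**2)

for v in (1.00, 0.95, 0.90):
    print(f"{v:.2f} V -> up to {max_clock_ghz(v):.2f} GHz at the cap")
# Ignores leakage and stability limits, but shows why a good undervolt
# beats raising the clock slider on power-limited cards.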

nvidia crashes

not a single issue on Radeons

Huang really gave up on gaming; even their drivers are low quality now. Shame

I've seen a whole lot of AMDtards crashing and shit on Expedition 33 and I haven't had a single problem with my 5070 Ti.

See the OP video and the tech reviews instead of anecdotes

based AMD saving its buyers from a shit game

Game has Denuvo

Runs like shit.

Many such cases.

This is from der8auer's video, right?

Yeah, but he's not the only one who came to the same conclusion. Undervolting is now the biggest lever if you're trying to push your HW further

Hoping I can finally upgrade my 2070S this year. The card has given me a lot of trouble fan-wise, but I don't want to spend more than 400 euro on a GPU nowadays.

AMD's RDNA2 was great for its time too. I think the A-team worked on both uarchs, so expect the next one to suck ass like RDNA1 and 3