Topic: General Xbox Series S Thread

Posts 101 to 120 of 148

Kefka2589

@Darylb88 I really have no idea about Watch Dogs. That video Jason Ronald did breaking down the Series S showed some quick clips of Watch Dogs and it looked to be ray traced. But I really have no idea if that was actual Series S footage. I still had the option to opt out of my S and into an X and I did it without blinking. Oh well. Maybe it'll change my mind later. But I'm not rolling the dice on it.

Kefka2589

@graysoncharles See, that's the part we just have no way of knowing. At the bottom of the video it says something akin to "Series S hardware and Series S games are still under development." So I don't know if maybe they just didn't have a Series S version of Valhalla or Watch Dogs quite ready to show off. I had ordered a Series S based off of that presentation, but the more info I see coming out the less I want it. The whole thing is honestly getting a bit disappointing. I wish they'd just show us the stinkin games already.

Kefka2589

@Senua 1080p60 with RT, yeah. The PS5 modes are exactly the same as the Series X modes.

Senua

@Kefka2589 There's a reason more expensive GPUs with higher CU counts pull ahead in performance: it's not just about raw 4K resolution, it's about the additional textures, special effects, high-res ray-traced AO, GI, volumetric shadows, DLSS/DirectML etc., all of which absolutely require more CU headroom. Next-gen optimisations like the mesh shading geometry engine, VRS Tier 2 and Sampler Feedback are necessary to free up much of that CU headroom, and Capcom has said they're still scratching the surface of them on the Scarlett generation of consoles.

Edited on by Senua

Senua

@graysoncharles "To better handle 4K"? Except it's only supposed to render a quarter of the pixels, and it's targeted at a different demographic. Also, teraflops can't be compared like that across generations: a 6TF GCN part is actually weaker than a 4TF RDNA2 part, which brings not only huge IPC improvements but also additional optimisations and features. According to the hardware team, the Series S is designed to deliver an optimised 1080p/1440p experience after all. For good RT performance even 36 CUs is too few, and doesn't provide the desirable headroom without making other compromises (low-res reflections with hybrid screen-space solutions).

But it's targeted at a different, casual market that will hardly care about a Digital Foundry analysis and just wants to get into some casual gaming. Hats off to the Xbox hardware team: they've made no compromises on the things that actually matter (like the CPU and I/O), assuring smooth performance for next-gen game designs with no noticeable difference to casual gamers. The rest of the graphical features will be scalable, as they've always been on PC.
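For reference, the pixel maths behind "a quarter of the pixels" is plain arithmetic, nothing console-specific:

```python
# Pixel counts per frame at the common target resolutions.
res_4k    = 3840 * 2160   # 8,294,400 pixels
res_1440p = 2560 * 1440   # 3,686,400 pixels
res_1080p = 1920 * 1080   # 2,073,600 pixels

print(res_4k / res_1080p)  # 4.0  -> 1080p is exactly a quarter of 4K
print(res_4k / res_1440p)  # 2.25 -> 1440p is ~44% of 4K
```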

Edited on by Senua

Kefka2589

@Senua I'm perfectly aware of that. I don't require a lecture.

@graysoncharles Of course one TF of RDNA2 is greater than one TF of GCN. Jason Ronald presented the Series S as basically having roughly the same GPU power as the One X, with all the next-gen bells and whistles, just without the 4K target. So I put a little faith in the guy and took his word for it. But I think a 4TF RDNA2 GPU just isn't going to cut it in some respects. To hit that $300 price tag they really had to take a hit on the GPU and SSD. Either way, the machine will perform perfectly fine. But I just don't think it's going to be a Series X at a lower resolution, like they sorta presented it as. We're going to see lower texture quality, lighting, particle effects, the whole lot, I'd say. Which doesn't bother me in the grand scheme of things; it's honestly what I expected. But like I said, I was being generous and took MS's word that it would be more than what it actually is.

The Coalition has Gears 5 running at 1080p at 120fps and it's very impressive. But that surely required a hefty amount of optimizing, and I just don't see many devs spending that much time polishing Series S versions of their games. So we'll see what the future holds for the little guy. In the end, I think it'll do just fine for a lot of people.

Kefka2589

@Senua Sorry, not trying to be rude. Perhaps the comment came off the wrong way when it wasn't intended to. Either way, the PS5's GPU is just plain weird to me. I don't think it's a bad GPU by any stretch, but in terms of performance I don't see any math that could indicate the PS5 holds an advantage. In most games I don't think it's going to make a huge difference, though. And as you alluded to, the CU count matters here because the RT hardware is basically baked into the CUs, so if there's going to be a big difference anywhere, that's surely where we'll see it. My guess is that it'll probably be a repeat of the last-gen refreshes: the X will look a shade better, but the Pro still ran games more than acceptably. Either way, I'm getting a Series X and a PS5, so it doesn't matter to me. I'll just play on whichever platform, unless there's a Bayonetta 360-vs-PS3 scenario. Lol

Senua

@graysoncharles "Flops are flops, no matter the architecture" Nope, they are not. Flops don't paint the complete picture when comparing across architecture types: something that takes X cycles on one architecture might take far fewer cycles on another, more efficient one. It's why Series X back-compat titles running under the older GCN emulation use the pure brute force of the new generation while drawing less power than the One X and still giving better performance, whereas titles like Gears 5, which seem to have implemented some RDNA2 features, are now drawing more power, since they use the additional features and optimisation headroom that the "GCN emulation" back-compat titles leave untapped. This is why the operations-per-second metric of one architecture can't be compared with another's: it's misleading, and the effective-performance calculations are different for different architectures. Maybe @BAMozzy can throw some more light on this subject.

Also, it's not really absurd to expect next-gen games (most of them are far away) to fully optimise for the entirely new architectures/platforms and all the new performance potential that comes with them. I'd expect this from first-party studios as well as third parties, since a lot of them (like EA) are already revamping their engines for the next generation.

Edited on by Senua

MikeHiscoe

Was able to get a pre-order for a Series S today, which I feel good about. I'm comfortable making the move to all-digital, but I was wondering if Microsoft has ever considered a disc-to-digital program like we've seen in the movie industry. I don't have a large collection of Xbox One discs, but I would love to be able to just sell or store away my Xbox One for good.

I suppose eventually anything I have on disc may be available on sale, on Game Pass or as a Games with Gold freebie, but I'd be willing to pay a few bucks from the get-go just to have my full Xbox One library on my Series S.

What are the rest of you planning on doing with your old games on disc?

Kefka2589

@graysoncharles And there's truth to that. A TF is a TF. But most of what I've read seems to agree that RDNA2 is, FLOP for FLOP, at least a 25% performance gain over GCN, so I think that efficiency really matters. I'm also just repeating what Jason Ronald said. That was his claim, not mine; he's surely an expert and I'm not, so until another expert clearly proves him wrong, that's all we really know. I've never been 100% on board with "TFs don't matter." They do matter, but other stuff matters too. I think it's just considerably more complicated than one number can capture. But yeah, on the Nvidia vs AMD thing, I've seen what seemed like a clearly weaker Nvidia card put a hurting on a stronger AMD card. But again, like you said, that has a lot to do with drivers and priorities. So that plays into the "other things matter" side of this.

BAMozzy

@graysoncharles @Senua The Flops figure is the theoretical maximum number of floating point operations a GPU can do per second. It's based on the number of available shaders and how many cycles per second they run.

However, no GPU is 100% efficient, and there is latency between parts. If it's having to wait for an instruction from the CPU, that's some dropped 'flops' there; waiting for data from RAM, that's more flops dropped; waiting on another area to calculate something, more dropped flops...

Newer architectures look at reducing the latency between everything, so the GPU isn't waiting as long, and at improving efficiency, so that more of the GPU is working more of the time.

No GPU can get more out than is technically possible. The Series X cannot do more than 12.1 trillion floating point operations per second, just like the Xbox One X can't do more than 6 trillion. But because of the efficiency and latency of the Xbox One X architecture, it may only be able to do, say, up to 3 trillion. Because the Series X has 'improved' architecture, it's able to do, say, up to 9 trillion - a 50% improvement over last gen, because with the old efficiency and latency, a 12.1TF GCN part would only manage about 6 trillion floating point operations.

We tend to look at it from the wrong perspective - that the newer architecture is more powerful, that to match a newer GPU you would need a more powerful older GPU (for example, 4TF new is like 6TF old). Really, it's more down to the fact that the newer architecture is busy more of the time. It's actually getting closer to that theoretical maximum. GPUs don't use every single shader on every single cycle, so it's 'impossible' to hit that maximum - but it is possible to improve the efficiency (have more shaders working more of the time) and reduce the latency (the delay between sending and receiving instructions, data etc.) to get closer and closer to it.

Technically, Flops are Flops regardless of the architecture - it's a theoretical calculation. However, new architecture gets closer and closer to that theoretical maximum. It's all down to reducing the latency and improving the efficiency so that the GPU performs more floating point operations per cycle than the previous architecture...
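For what it's worth, the theoretical number itself is just arithmetic: shaders × 2 ops per cycle (one fused multiply-add) × clock speed. A quick sketch using the publicly quoted console specs:

```python
# Theoretical peak FLOPS = shaders x 2 ops/cycle (FMA) x clock.
# Efficiency and latency appear nowhere in this formula - which is
# exactly why the paper figure is a ceiling, not a measurement.

def theoretical_tflops(compute_units, clock_ghz, shaders_per_cu=64):
    """Peak single-precision TFLOPS, assuming every shader issues an FMA every cycle."""
    return compute_units * shaders_per_cu * 2 * clock_ghz / 1000.0

print(f"Xbox One X: {theoretical_tflops(40, 1.172):.2f} TF")  # ~6.00 TF (GCN)
print(f"Series S:   {theoretical_tflops(20, 1.565):.2f} TF")  # ~4.01 TF (RDNA 2)
print(f"Series X:   {theoretical_tflops(52, 1.825):.2f} TF")  # ~12.15 TF (RDNA 2)
```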

Edited on by BAMozzy

A pessimist is just an optimist with experience!

Why can't life be like gaming? Why can't I restart from an earlier checkpoint??

Feel free to add me but please send a message so I know where you know me from...

Xbox Gamertag: bamozzy

Senua

@graysoncharles Maybe (in non-gaming applications like data science, video production, CAD). Maybe not. The 3070 has hardware support for RTX I/O decompression and the DirectStorage API, so it doesn't really need to load all its textures at once.

Edited on by Senua

BAMozzy

There are tricks the Series S can use if the devs implement them - Sampler Feedback is a good example, where the RAM (or lack of it) can be offset. And there's a big difference in CPU, not just in terms of speed and efficiency but also in the fact that the CPU isn't needed to decompress textures any more.

As for the 6TF of the XB1X vs the 4TF of the Series S - if the efficiency and latency improvements are big enough, the 4TF can match the 6TF. If, for example, the old architecture is only able to operate at up to 50% of its theoretical maximum - i.e. capable of doing 3 trillion floating point operations - and the new is 75% efficient, so it also does up to 3 trillion, then it's a match. As I said, no GPU actually manages 100% efficiency, meaning none actually reaches its theoretical maximum. There are times when a shader isn't working during a cycle because it's waiting on data or an instruction from another area, and at 1800MHz there are 30 million cycles per 60fps frame - even a tiny delay in getting an instruction from the CPU can waste cycles where the GPU is doing nothing.

So whilst the calculation is the same and the theoretical maximums are somewhat of an indication, they're only worth comparing within the same architecture and family. There's no point even comparing the same 'generation' of GPUs from different manufacturers, because they will have different efficiency and latency - which is why nVidia can outperform an AMD GPU with the 'same' theoretical maximum. No GPU is 100% efficient, so no GPU will hit that theoretical maximum: the Series X will NOT do 12.1 trillion floating point operations per second, the Series S will NOT do 4 trillion, the XB1X will NOT do 6 trillion... How many they can actually do isn't known, but how well they run games is often a way to show how much closer they get to 100% efficiency. A more efficient, lower-latency GPU can outperform a GPU with a higher theoretical maximum because it isn't idle as much...
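Putting those illustrative percentages into numbers (the 50%/75% efficiency figures are hypothetical, as above - nobody outside the hardware teams has measured them):

```python
# Effective throughput = paper TFLOPS x (hypothetical) sustained efficiency.
def effective_tflops(theoretical_tf, efficiency):
    return theoretical_tf * efficiency

print(effective_tflops(6.0, 0.50))  # Xbox One X (GCN):    3.0 TF of real work
print(effective_tflops(4.0, 0.75))  # Series S (RDNA 2):   3.0 TF of real work - a match
```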

CedAlastor

Now that many YouTubers have the Xbox Series S, has anyone seen how much of the 512GB SSD is actually usable?

BazzaRFC

After all the unboxings today for both X and S, I was surprised that Microsoft did not include a month's free trial of Game Pass with the new consoles.

BazzaRFC

@graysoncharles thanks for the reply. I did not know that it would be available for such a low price.
