
Topic: General Xbox Series X|S Thread

Posts 381 to 400 of 1,434

Banjo-

@Senua Exactly. If the CPU and GPU can both run at their maximum clocks on PS5 without issues, why use Smart Shift at all, and why reduce CPU power when the GPU needs more power, and vice versa?

Banjo-

BAMozzy

@Senua I doubt it - but 'maybe' the CPU and GPU can both run simultaneously at 'max' IF the power draw to other areas (like the audio processor) is not being pushed - after all, the audio processor is going to be drawing power roughly equivalent to a PS4 Jaguar CPU. If it's not being pushed, can that power be used elsewhere?

You also have to remember that games will have 'spikes' where the GPU and/or CPU will be pushed for a few frames at most. Throw a grenade and when it explodes, there will be a big spike as physics, particles, alpha transparencies etc. push the GPU/CPU for a few frames. A GPU, whether it's locked at 2.23GHz or not, if optimised so it doesn't drop a frame at that most intensive point, would still only hit (up to) 100% utilisation for fleeting moments. Instead of the utilisation dropping down to 90% or 80%, for example, after those peaks, the GPU can drop its frequency so it's still running at, say, 98% utilisation - a more efficient use of power.

Having a 'static' APU means it's always running at full speed, so it may spend much of the time only utilising 80% of its full capability, with that headroom there to cope with the most intensive spikes. A variable APU can still cope with the spikes, but the GPU is being utilised more fully. Instead of drawing the power to run at 2.23GHz at only 80% utilisation, it can drop to, say, 2GHz, saving power and running at 95% utilisation.
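To put rough numbers on that trade-off - a purely illustrative sketch with invented workload figures, not Sony's actual algorithm, assuming the common CMOS rule of thumb that power scales with frequency times voltage squared, with voltage itself scaling roughly with frequency:

```python
# Illustrative only - invented figures, not Sony's actual algorithm.
# Rule of thumb: power ~ f * V^2, and V scales roughly with f, so power ~ f^3.

MAX_CLOCK_GHZ = 2.23

def relative_power(clock_ghz):
    """Power draw relative to running flat out at the max clock."""
    return (clock_ghz / MAX_CLOCK_GHZ) ** 3

fixed_power = relative_power(2.23)    # static APU: 100% power, ~80% utilised
variable_power = relative_power(2.0)  # variable APU: ~72% power, ~95% utilised

print(f"fixed clock:    {fixed_power:.0%} power")
print(f"variable clock: {variable_power:.0%} power ({1 - variable_power:.0%} saved)")
```

So even a modest ~10% clock drop saves roughly a quarter of the power under that assumption - which is why varying frequency rather than utilisation can be the more efficient route.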

Frame time is the most important metric to a developer, as it determines the budget they have to deliver the best visual quality at the smoothest frame rate. A CPU and GPU are not always running at full capacity throughout the frame time. If you have 16.666ms (for 60fps), there will be parts of it where the CPU is working really hard and parts where the GPU is. The first part of the frame is most likely where the CPU is needed most - calculating physics, working out where all the objects in the world are after whatever inputs you have made are processed, what the AI is doing, any collisions (like bullets etc.) - and telling the GPU what to draw. The GPU then renders the basic framework of the image, fills in all the textures and applies all the effects before any post-processing like filmic grain, anti-aliasing etc. Therefore, Smart Shift could shift power back and forth between the two - maybe starting with 'max' CPU speed to process all the physics, AI and draw calls as quickly as possible, then transferring the power over to the GPU as the most intensive workloads shift from CPU to GPU during that 16.666ms.
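As a sketch of that idea - invented phase timings, not AMD's actual SmartShift logic:

```python
# Illustrative only. Within one 16.666ms frame the heavy work moves from CPU
# (physics, AI, draw calls) to GPU (rendering), so a shared power budget can
# follow it. All phase shares are made up.

FRAME_BUDGET_MS = 1000 / 60   # 16.666ms at 60fps

# (phase, main worker, rough share of the frame)
phases = [
    ("input/physics/AI/collisions", "CPU", 0.25),
    ("draw-call submission",        "CPU", 0.15),
    ("geometry and shading",        "GPU", 0.45),
    ("post-processing (AA, grain)", "GPU", 0.15),
]

t = 0.0
for name, worker, share in phases:
    dur = share * FRAME_BUDGET_MS
    print(f"{t:5.2f}-{t + dur:5.2f}ms  {name:<28} shift power -> {worker}")
    t += dur
```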

It's difficult to know what the PS5 could do in a hypothetical situation where a game is both CPU and GPU intensive for more than, say, 10s (or 600 frames at 60fps) and would benefit from 3.5/2.23GHz - assuming the devs haven't optimised their game for such a prolonged spike...

Like I said, on Series X they will optimise for a spike - and one way they tend to get a bit more visual quality overall is by using dynamic resolution scaling. If those spikes are only occasional, they can drop the resolution in those moments so that the rest of the time it's running at 90% utilisation on average, instead of, say, 80% to keep headroom for spikes.

The same choice will still be available to PS5 devs too - except they won't be looking at GPU utilisation but at GPU frequencies. Instead of running at, say, 80% speed because those spikes need headroom so they don't drop frames, devs can opt for settings that drop the speed to 90% and then use dynamic resolution for the spikes too - illustrative numbers, but the same principle as on Series X.
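A minimal sketch of such a dynamic resolution controller - invented thresholds, real engines are more sophisticated, but the principle is the same on both consoles:

```python
# Minimal dynamic resolution controller sketch, invented thresholds.
# If the last frame ran long, drop render resolution; if there's slack, raise it.

FRAME_BUDGET_MS = 1000 / 60

def next_resolution_scale(scale, last_frame_ms,
                          min_scale=0.7, max_scale=1.0, step=0.05):
    """Return the render-resolution scale (1.0 = native 4K) for the next frame."""
    if last_frame_ms > FRAME_BUDGET_MS:        # spike: missed the budget
        return max(min_scale, scale - step)
    if last_frame_ms < 0.9 * FRAME_BUDGET_MS:  # plenty of headroom
        return min(max_scale, scale + step)
    return scale

scale = 1.0
for frame_ms in [15.0, 16.0, 18.5, 19.0, 16.2, 14.8, 14.5]:  # made-up frame times
    scale = next_resolution_scale(scale, frame_ms)
    print(f"frame took {frame_ms:4.1f}ms -> next frame at {scale:.0%} of native")
```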

The big point, though, is that regardless of whether it's a static APU or a variable APU, games do NOT use the full capacity for extended periods. Some scenes may be more CPU intensive, others more GPU intensive, and a dev can optimise based on the peak performance needed during the most intensive spike points in the game - knowing that if they can cope with the worst spikes, the rest will be OK. That may still mean the Series X has an advantage, with many more shaders and a faster CPU, but even then, if a game is optimised to run consistently at 60fps for example, there will be long periods of gameplay where the CPU and GPU are not being fully utilised - drawing power to keep running at that speed without doing 'anything'. Not to say that's worse, but it's not the most efficient use of power.

It doesn't mean the PS5 will necessarily be significantly worse than the figures Mark Cerny gave either, as it will run at those figures as and when needed for the peaks, and the rest of the time run slower to save power, reduce heat and potentially noise too. Instead of being like the Xbox GPU, utilised at maybe 85% for the majority of the game to keep overhead available for spikes, the PS5 will run at 99-100% utilisation consistently, varying the frequency to save power and reduce heat generation. When spikes occur, the Xbox has the overhead to cope with them and the PS5 can boost the frequency to cope with them. BOTH can also push the graphics up and use dynamic resolution scaling for those spikes too.

What we don't know, though, is what may happen if devs rely on dynamic resolution scaling so the GPU is ALWAYS being pushed - especially if the CPU is also being pushed with AI, physics, draw calls etc. If it's never hitting native 4K because the GPU at 100% utilisation still can't render native 4K in the frame time, how does that affect the PS5? I would think the GPU may have to drop speed to boost the CPU, and we would see a bigger resolution drop as a result - but again, we don't know whether the CPU and GPU can run at full speed if other areas like audio are not being pushed, and if so, for how long without issues. It may end up being 3.3/2.0GHz in that situation, with the dynamic resolution/frame rates indicative of that. I think that's an extreme case we can't yet answer - what happens if a game releases that pushes both CPU and GPU for an extended period, as opposed to one with CPU and/or GPU spikes. Again, though, that will be down to the devs and how they want to optimise - they can turn down settings that reduce CPU and/or GPU load: fewer particles or smaller NPC crowds for example, or CB rendering instead of native...

A pessimist is just an optimist with experience!

Why can't life be like gaming? Why can't I restart from an earlier checkpoint??

Feel free to add me but please send a message so I know where you know me from...

Xbox Gamertag: bamozzy

BAMozzy

@Senua The reason you may want to use Smart Shift is to reduce both power draw (for economic and environmental reasons) and heat generation - which potentially reduces the cost of the cooling solution and the noise too.

If a game is optimised well, the CPU and GPU won't need to run at full speed ALL the time - just for the spikes when things get crazy on screen. A Series X will still be running at full speed when you're just looking at the sky, for example, spending much of the time doing nothing - a waste of power, costing more to run and generating unnecessary heat that needs to be controlled. The PS5 can drop the frequency, which drops the power consumption and generates less heat - meaning the fans don't need to run as often or as fast to manage that heat - which again means less power draw and lower running costs. The cooling system can be smaller - especially if you make the system physically bigger for better natural heat dissipation - which reduces manufacturing costs and makes it quieter overall too.

Not to say MS's Series X won't be quiet, but it may have a bigger fan and a more expensive cooling system that draws more power overall - designed to run quietly, but maybe running more often as a result.

I know that environmental and running costs are often not a consideration at all for many gamers, but Smart Shift can benefit both. Again, we will have to see over the generation whether this was a 'smart shift' by Sony or not. I doubt we will see many, if any, games pushing BOTH CPU and GPU to the max for years to come - especially as Sony has shifted audio and decompression away from the CPU itself. Pushing both CPU and GPU to the point that both are bottlenecking, causing frame rates to drop, is incredibly rare. DF have to drop games to ridiculously low resolutions (by today's standards) to test CPUs - to eliminate the GPU as the cause of frame rate drops - so it could be years before we see games that simultaneously push both CPU and GPU to the point that Smart Shift cannot move power between them without detrimental effect, i.e. shifting power to the GPU causes big CPU issues or vice versa. More likely, the GPU will be the limiting factor - even running at the full 2.23GHz won't be enough to hit a native resolution with the highest visual settings and target frame rate. Hence we will see 4K/30 with RT and not 4K/60 with RT, for example - not necessarily because the GPU isn't boosting to 2.23GHz, but because the visual settings are too much for a 10.2TF GPU to render at 4K/60.


Senua

@BAMozzy That's really very interesting, considering that Series X has a custom media/hardware acceleration chip that's even more powerful than all the XB1 Jaguar cores combined (according to Hot Chips). Digital Foundry also confirmed HRTF in Series X in a recent video, which they said Microsoft chose not to market as much. But if you watched the Project Acoustics demo from Microsoft Research a few years ago, it looks very promising. We have to wait and see how much of a difference in spatial audio performance is realistically noticeable.
And yes, I don't really care about the power draw difference from a limitless AC outlet, which will mostly vary by a few watts in some cases, if at all - especially given that they are both built on 7nm, and the Xbox Series X has a huge passive cooling block for which it only needs one big, slower, silent fan (according to Phil) to keep adequate circulation. But if Smart Shift is that revolutionary, maybe it should come to the PC hardware space, since I have seen PC gamers are always very concerned about the power draw/consumption of their beefy builds.

Edited on by Senua

Senua

BAMozzy

@Senua According to Mark Cerny, the audio processing is equivalent to a PS4 CPU - an 8-core Jaguar processor clocked at 1.6GHz. A lot of noise was made about the fact that Sony now has a separate processor for audio - although the XB1 had a separate audio processor anyway, perhaps because of Kinect and processing voice commands, but still part of the design of the XB1 S and X. According to MS, their new audio processing is more impressive and more capable than the XB1X's CPU - an 8-core Jaguar processor clocked at 2.3GHz - meaning that, if true, the Xbox has the more powerful audio processor, but didn't need to make a big deal out of it like Sony did, because for MS, separating the audio from the CPU isn't new for Xbox...

Anyway, my point still stands - if the audio processor isn't being pushed by having 5,000 separate points of audio ricocheting around the environment, processing how each sound will be heard through a stereo headset, then it isn't drawing as much power as it theoretically could. Will that affect the amount of power the CPU/GPU can use? If the audio IS being pushed, drawing more power, will that affect the power to the CPU/GPU? If the audio is relatively simple and the audio processor isn't drawing power for the full frame time, the system could theoretically use that power to boost the CPU/GPU. It seems like the APU has a power-draw cap - it can't draw more to boost ALL areas to max - so if one area is not drawing much, can that free up more for the CPU or GPU?
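Expressed as code, the question looks something like this - pure speculation with invented numbers, since we don't know Sony's actual budgets or whether the audio block even shares the cap:

```python
# Speculative sketch only - all numbers invented. IF the APU has a total
# power cap shared between CPU, GPU and audio, whatever the audio unit
# doesn't draw could be handed to the CPU/GPU.

APU_POWER_CAP_W = 180          # hypothetical total budget

def allocate(audio_demand_w, cpu_demand_w, gpu_demand_w):
    """Give audio what it asks for, then split the rest between CPU and GPU."""
    audio = min(audio_demand_w, 20)        # hypothetical audio ceiling
    remaining = APU_POWER_CAP_W - audio
    wanted = cpu_demand_w + gpu_demand_w
    ratio = min(1.0, remaining / wanted)   # scale both down if over budget
    return audio, cpu_demand_w * ratio, gpu_demand_w * ratio

# Quiet cutscene audio vs. a heavy 3D-audio scene:
for audio_w in (5, 20):
    a, c, g = allocate(audio_w, cpu_demand_w=60, gpu_demand_w=120)
    print(f"audio {a:4.1f}W -> CPU {c:5.1f}W, GPU {g:5.1f}W")
```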

Smart Shift may not be so simple with independent components and different CPU/GPU combinations - it's better suited to APU builds and battery-powered devices like mobiles, tablets and laptops than to a desktop PC. In a portable, battery-powered device it would extend battery life, because it's only drawing the power it needs rather than running the CPU/GPU flat out and spending some of that time doing nothing. It's better to draw less power, run slower and still get the frame out in the required frame time. Nothing is gained by getting the frame done in 10ms and then running at full speed doing nothing for 6.666ms, when you can drop the speed, reduce the power being used and still hit the 16.666ms frame time - thus giving you longer battery life on a charge.
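A sketch of that "slow and steady beats race-to-idle" arithmetic - invented numbers, and the cubic power model from my earlier post is only a rule of thumb:

```python
# Invented numbers. If a frame finishes in 10ms at full clock, a chip that
# can scale frequency only needs enough clock to finish just inside 16.666ms.

FRAME_BUDGET_MS = 1000 / 60    # 16.666ms at 60fps
MAX_CLOCK_GHZ = 2.23

work_ms_at_max = 10.0          # hypothetical: frame done in 10ms at full speed

# Work is roughly inversely proportional to clock, so the slowest clock that
# still makes the deadline is:
needed_clock = MAX_CLOCK_GHZ * work_ms_at_max / FRAME_BUDGET_MS   # ~1.34GHz

# With power ~ clock^3, the energy per frame drops even though the chip is
# busy for the whole 16.666ms instead of 10ms:
energy_full = (MAX_CLOCK_GHZ / MAX_CLOCK_GHZ) ** 3 * work_ms_at_max
energy_slow = (needed_clock / MAX_CLOCK_GHZ) ** 3 * FRAME_BUDGET_MS

print(f"clock needed: {needed_clock:.2f}GHz")
print(f"energy per frame: {energy_slow / energy_full:.0%} of flat-out")
```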

The point I was trying to make is that Sony's approach is different and has environmental and economic benefits - something that may be a unique selling point now that individual carbon footprints are more of a concern. The approach is just different from the norm we have come to expect over the years. It isn't necessarily better or worse - no doubt it has its cons compared to traditional fixed frequencies, but it has pros too. Added up over hundreds, maybe thousands of hours of gaming, you could save money on your electricity bill comparatively - and let's be honest, these next-gen consoles are MUCH thirstier than previous generations...
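Back-of-envelope, with invented figures - swap in your own tariff and play time:

```python
# Hypothetical figures, just to put "save money on the bill" into numbers.
watts_saved   = 30     # hypothetical average saving from variable clocks
hours_played  = 1000   # hundreds to thousands of hours over a generation
price_per_kwh = 0.30   # hypothetical tariff in your local currency

kwh_saved = watts_saved * hours_played / 1000   # 30 kWh
print(f"{kwh_saved:.0f} kWh saved -> about {kwh_saved * price_per_kwh:.2f} off the bill")
```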

[image]

We don't know what power consumption we will see, but the PS5 has a 350W power rating (340W for the discless model), and quite a bit of the talk surrounding it has been about power consumption: reducing idle power draw, reducing standby power draw and, of course, running with variable frequencies so no (or less) power is wasted running at full speed doing nothing.

Not to say it's better, because of course it may have drawbacks - like not being able to run the full APU at full speed (including audio, for example) for prolonged periods. Much of the difference between PS5 and Series X could well be down to the different designs - 52CUs vs 36CUs and the nearly 2TF of RDNA 2.0 that difference offers - rather than because the PS5 isn't running at 2.23GHz. It could very well be running at 2.23GHz when it matters. Even when it isn't, it may well be because it would just be doing nothing for part of that time, waiting to start the next frame - so as a gamer, you still wouldn't see ANY difference in the visuals or the frame rates. On Series X, that would be when the GPU is only running at 70% capacity. It's still running at full speed on an OG Xbox game, even if it's capped at 720/30 and completing the frame in 5ms - for the remaining 28.333ms it's doing nothing, waiting to start the next frame, but still running at full speed.

It's just a new approach from Sony, and we may well see it catch on in PC builds in the future, but right now it's likely to be used more in battery-powered APU devices to prolong battery life - it's easier to integrate into an all-in-one APU than across individually sourced components...


Senua

If I'm not wrong, there's a reason why more expensive GPUs pull ahead at higher resolutions: it's not just about the raw 4K resolution, it's about the additional textures, special effects, high-res ray-traced AO, GI, volumetric shadows, reflections, DLSS/DirectML etc., which absolutely require more CU headroom. Some next-gen optimisations - mesh shading in the geometry engine, VRS Tier 2, Sampler Feedback - are there to free up much of that CU headroom.

Edited on by Senua

Senua

Senua

@BAMozzy “for the remaining 28.333ms it's doing nothing, waiting to start the next frame, but still running at full speed.”
You might want to see this.

A knowledgeable guy (and dev) has also confirmed to me that this is how DLI works, or at least the problem it aims to solve.

Edited on by Senua

Senua

BAMozzy

@Senua More CUs mean more shaders, which means more simultaneous instructions per cycle. The PS5 has 36CUs, which equates to 2304 shaders. Compared to the Series X's 52CUs, or 3328 shaders, that's quite a difference. The PS5 has the same number of shaders as the PS4 Pro, but obviously running significantly faster - many more cycles per second - and of course you have the efficiency and latency improvements that RDNA 2.0 brings, as well as other customisations too, but even with all that, the GPU is still only 2304 shaders. Having over 1,000 more shaders - 44% more - means that every cycle, the Series X is able to handle more instructions.
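The arithmetic behind those numbers (these match the published specs - RDNA has 64 shaders per CU, and each shader can do 2 floating-point ops per clock via fused multiply-add):

```python
# Shader counts and TFLOPS from CU count and clock speed.
SHADERS_PER_CU = 64

def tflops(cus, clock_ghz):
    shaders = cus * SHADERS_PER_CU
    return shaders, shaders * 2 * clock_ghz / 1000   # FMA = 2 ops/clock

ps5_shaders, ps5_tf = tflops(36, 2.23)     # 2304 shaders, ~10.28 TF
xsx_shaders, xsx_tf = tflops(52, 1.825)    # 3328 shaders, ~12.15 TF

print(f"PS5:      {ps5_shaders} shaders, {ps5_tf:.2f} TF")
print(f"Series X: {xsx_shaders} shaders, {xsx_tf:.2f} TF")
print(f"Series X has {xsx_shaders / ps5_shaders - 1:.0%} more shaders")
```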

Whilst it doesn't necessarily work like this, the Series X could, for example, dedicate 12CUs to RT and still have 40CUs left for all the traditional rasterisation. An AMD RX 5700 XT has 40CUs, which can be overclocked (yes, overclocked) to 1825MHz to give a similar (albeit RDNA 1.0) profile to what the Series X would have just for normal rasterisation - and, a bit like Nvidia with its separate RT cores, have 12CUs left to dedicate to RT.

Things like VRS, DLSS, CB rendering etc. do add to the frame time, but overall they save frame time. For example: at native 4K, let's say a GPU takes 20ms to render; at 1080p it takes 10ms, and at 1440p, 14ms. CB rendering would save time because it's only rendering half the image - say it saves 5ms, but then adds 1ms to compile with previous images. DLSS may add 1ms or 2ms, so 1080p with DLSS is 12ms, and 1440p becomes 15ms as it has more information to work with. All of these are savings over the 20ms native render. VRS may add an extra 1ms in the process, but when it comes to rendering, the image is like a range of resolutions. Some areas are shaded like 1080p (a 2x2 block of 4K pixels treated as a single unit) and others at half 4K (a 2x1 or 1x2 block), so you're not seeing a native 4K as such, but a composite. The sky, for example, could be treated more like 1080p, the edges too, with the middle of the screen at 4K.
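Tallying those illustrative numbers (all invented for the example - real costs vary per game and GPU, and the VRS shading saving here is entirely hypothetical):

```python
# Frame-time bookkeeping with the made-up figures from above.
NATIVE_4K_MS = 20.0

options = {
    "native 4K":    NATIVE_4K_MS,
    "1080p + DLSS": 10.0 + 2.0,                   # base render + upscale cost
    "1440p + DLSS": 14.0 + 1.0,                   # more data to work with
    "CB rendering": (NATIVE_4K_MS - 5.0) + 1.0,   # half the pixels + compile
    "4K + VRS":     NATIVE_4K_MS - 4.0 + 1.0,     # hypothetical shading saving
}

for name, ms in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:<14} {ms:5.1f}ms  (saves {NATIVE_4K_MS - ms:4.1f}ms vs native)")
```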

Mesh shading is more about LoDs, and in context it can save significant time over previous options. Devs create a very high-polygon model and use that to create textures and lower-polygon versions. Mesh shading allows devs to use just that high-polygon version and automatically scale it down in-game based on LoD settings, resolution output etc. If you have a small object in the distance, taking up 240 pixels at 4K, it doesn't need millions of triangles, as there aren't enough pixels to render them - it's simplified down; at 1080p, you only have 60 pixels. Devs can also set the distance at which each level of simplification kicks in - the Asteroids demo is an example of that, where an asteroid could be billions of triangles, but in the far distance it's made of only 4. As you get closer, the complexity and polygon count increase. The demo keeps the actual rendered polygon count relatively stable despite the massive polygon counts the objects would have IF mesh shading wasn't being used. Games today do something similar, but it has to be done manually by devs, who create many versions of an asset with varying degrees of complexity and swap them in as you get closer. A face may be just a texture in the distance - no real geometry to it at all.
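A sketch of the idea mesh shading automates - made-up thresholds; the real pipeline does this per-meshlet on the GPU:

```python
# Invented budget rule: no point rendering more triangles than the pixels
# the object actually covers on screen can show.

def triangle_budget(pixels_on_screen, tris_per_pixel=0.5, min_tris=4):
    """Scale the triangle count with on-screen coverage, down to a minimum."""
    return max(min_tris, int(pixels_on_screen * tris_per_pixel))

# The same asteroid at different distances / output resolutions:
for label, pixels in [("far away at 4K", 240),
                      ("far away at 1080p", 60),
                      ("filling the screen at 4K", 3840 * 2160)]:
    print(f"{label:<26} -> ~{triangle_budget(pixels):,} triangles")
```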

The point is, it's all about frame time, and certain things are NOT just hardware. VRS doesn't need hardware and can be done in software; RT can be done in software; mesh shading too - but in software they can be so expensive that a playable game becomes difficult or impossible.

DLI is twofold in MS's design. They have looked at the pipeline and reduced latency throughout where they can, but also reduced the time it takes to get the player's input from the controller. By automatically asking for the input just before the start of the next frame, it's received much earlier by the system, so the CPU has that information sooner and it enters the pipeline sooner. Coupled with higher frame rates, which means the game is effectively processing more inputs per second too, you get lower latency.
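A sketch of that late-sampling idea - invented structure, not Microsoft's actual implementation:

```python
# Instead of using whatever input arrived at some earlier poll, the controller
# is asked again right before simulation starts, so the frame is built from
# the freshest possible input. Stand-in functions throughout.
import time

def run_frame(poll_controller, simulate, render, frame_budget_s=1 / 60):
    start = time.monotonic()
    state = poll_controller()   # sampled immediately before the frame's CPU work
    simulate(state)             # physics/AI/draw calls see the freshest input
    render()
    # Higher frame rates compound the benefit: more polls per second overall.
    sleep = frame_budget_s - (time.monotonic() - start)
    if sleep > 0:
        time.sleep(sleep)

# Example wiring with stand-in functions:
run_frame(poll_controller=lambda: {"x": 0.0},
          simulate=lambda s: None,
          render=lambda: None)
```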


BAMozzy

[image]

Not that I want to get involved in a fanboy war - I just thought this was funny...

Edited on by BAMozzy


Ryall

@BAMozzy The slightest earthquake and that Xbox is going to go sliding off.

Ryall

BAMozzy

@Ryall It certainly would - I just thought the image was funny - not meant to be taken seriously or looked at from a practical perspective LOL


BAMozzy

@Senua That was incredibly interesting and informative...

I may have to get an external SSD via USB now, instead of the external mechanical drive I have on my XB1X. I never thought the slight benefits of an external SSD were enough of a reason to spend silly money per TB - but now I see that was more down to the poor Jaguar CPU. The difference is staggering!


Senua

@BAMozzy Imagine the smooth experience in games built on XVA, where, according to Microsoft, the custom decompression block offloads the work of 2-4 Zen 2 cores.

Senua

Farmboy74

I read the Digital Foundry article, and by all accounts a SATA SSD is the way to go for external drives when playing Xbox One, 360 and OG Xbox games.

Farmboy74
