Can't wait to play the new Bethesda exclusives in 2025. Thank god I will have Deathloop and Ghostwire: Tokyo to tide me over in the meantime. On my Series X.
Can't imagine how I'll finish all these upcoming titles, let alone the several unannounced ones. Colt has dropped an absolute banger with this one. 🔥
@BAMozzy This chart doesn't take into account the AMD power-shift bottleneck that will kick in during CPU-intensive workloads in low-latency 60/120fps scenarios.
@AJDarkstar AGAIN, you are factoring in the system as a whole and NOT just the GPU.
The GPU of the PS3 may be inferior, but the CPU (the Cell) helps offset some of this. The Cell CPU in the PlayStation 3 helps take some of the load off the GPU by farming it out to the SPUs. These SPUs can then handle various bits and bobs that the GPU cannot (due to not having the power to do so). Understanding how to maximize the Cell's SPUs is key to getting the best out of the console – although a game typically needs to be an exclusive title to really benefit. Certain titles such as Alan Wake for the Xbox 360 did use the CPU for certain effects too – but overall that console doesn't rely on it to the degree that the PS3 does.
The GPU in the PS3 was not as powerful as the Xbox 360's, but because of the design of the CPU, certain tasks could be offloaded to it to take up some of that 'slack', so to speak. In terms of 'RAW POWER', though, the Xbox 360 GPU was more powerful than the PS3's.
A console isn't 'just' the GPU but a whole system including CPU, RAM, bandwidth etc.
A pessimist is just an optimist with experience!
Why can't life be like gaming? Why can't I restart from an earlier checkpoint??
Feel free to add me but please send a message so I know where you know me from...
@Senua That chart was just to illustrate the 'relative' FLOPS of consoles over the generations ONLY. It is ONLY looking at the GPU and its theoretical maximum floating point operations per second, and doesn't factor in the efficiency gains or latency improvements that allow newer GPUs to get closer to their theoretical maximum.
FLOPS are calculated from the number of shaders and the number of FP32 operations each can do per cycle, multiplied by the frequency – the number of cycles it runs per second – so the 'speed' has a big impact on this.
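That calculation can be sketched in a few lines of Python. The shader counts and clocks below are the published Series X and PS5 figures, and the 2 ops per cycle comes from each shader doing a fused multiply-add (counted as two FP32 operations):

```python
# Theoretical peak FP32 throughput, as described above:
#   FLOPS = shaders x ops_per_cycle x clock (Hz)
# Modern GPUs count 2 FP32 ops per shader per cycle (fused multiply-add).

def theoretical_tflops(shaders: int, clock_ghz: float, ops_per_cycle: int = 2) -> float:
    """Peak FP32 throughput in teraflops."""
    return shaders * ops_per_cycle * clock_ghz * 1e9 / 1e12

series_x = theoretical_tflops(3328, 1.825)  # ~12.15 TF
ps5      = theoretical_tflops(2304, 2.23)   # ~10.28 TF
print(f"Series X: {series_x:.2f} TF, PS5: {ps5:.2f} TF")
```

That's where the 12TF and 10.2TF headline numbers come from – pure arithmetic, nothing about how busy the shaders actually are.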
However, NO GPU runs at 100% efficiency – in other words, there are cycles where a floating point operation is NOT being done, either due to latency (waiting for an instruction) or efficiency (waiting on other operations to complete). So a 12TF Series X GPU cannot actually do 12 trillion floating point operations per second, and neither can the PS5 do 10.2 trillion – it's the theoretical MAX.
Improvements in architecture, scheduling, bandwidth etc. all contribute to a GPU performing more floating point operations – which is why an older 4TF GPU won't perform as well as a newer 4TF GPU. Both can have the exact same shaders and the exact same frequency, so will give the exact same theoretical TFLOP rating, but because the newer one has less latency and better overall efficiency, it can actually do more floating point operations per second – less idle time, less waiting for instructions or data from other parts of the system.
It's why you can't directly compare the 'FLOPS' between AMD and Nvidia, and why you can't compare the FLOPS between older and newer GPUs – the architecture and efficiency are 'different'. It's a theoretical 'maximum', but no GPU is 100% efficient. You can say the Series X is 2x as powerful (based on theoretical maximums), but the reality is you would need more than 2x 6TF GCN GPUs to deliver the same performance. You may well need an 18TF GCN GPU to match the performance of a 12TF RDNA 2.0 GPU, and the next gen may well do the same with just 10TF because it's more efficient – it gets closer to being 100% efficient. An old GPU may well be 50% efficient (in other words, a 10TF theoretical maximum can only do 5 trillion floating point operations – so 5TF in real-world performance – and to get 10TF real-world, you would need a 20TF GPU), while a newer one may be 66.66% efficient (so 10TF delivers ~6.7TF in real-world performance, and you would need a 15TF GPU to get 10TF of 'real-world' performance).
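The arithmetic in that last bit is easy to sketch. Note the efficiency percentages (50% and 66.66%) are the post's own illustrative figures, not measured values – real efficiency varies per workload and architecture:

```python
# Illustrative "theoretical vs real-world" TFLOPS arithmetic.
# The efficiency figures are made up for illustration, as in the post.

def real_world_tf(theoretical_tf: float, efficiency: float) -> float:
    # What the GPU actually achieves given its utilisation efficiency.
    return theoretical_tf * efficiency

def theoretical_needed(target_real_tf: float, efficiency: float) -> float:
    # Theoretical rating needed to hit a real-world target.
    return target_real_tf / efficiency

print(real_world_tf(10, 0.50))        # old 10 TF GPU  -> 5.0 TF real
print(theoretical_needed(10, 0.50))   # 10 TF real needs a 20 TF old GPU
print(real_world_tf(10, 2 / 3))       # newer 10 TF GPU -> ~6.7 TF real
print(theoretical_needed(10, 2 / 3))  # 10 TF real needs a 15 TF newer GPU
```

Same function, different efficiency constant – which is exactly why cross-architecture TFLOP comparisons mislead.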
This is all based on FP32 instructions too – if a game could use FP16 entirely, the Series X has a theoretical maximum of 24.2TF.
That chart doesn't need to factor in AMD's SmartShift – that doesn't affect the theoretical maximum whatsoever. A GPU running statically at 12TF doesn't mean you are getting 12TF ALL the time. It can sit idle doing NOTHING for half a frame, not processing any floating point operations at all. You often see in Digital Foundry analysis that the GPU is only running at 75%, meaning it's spending a lot of cycles doing nothing.

SmartShift is designed to use that 'idle' time to bump up CPU speeds and then, once the CPU has sent instructions to the GPU to draw the frame, send the power to the GPU. It can switch the power – speeding up either the CPU or the GPU – multiple times per frame, essentially making the console more efficient with its power consumption. There will be frames that only use 70% of the GPU's capacity because it's running at full speed all the time, and then a few frames at 99% because of lots of action and effects happening all at once. On PS5, that GPU could be running at 99-100% utilisation all the time – dropping the speed so it sits at 99% capacity to save power and reduce temperatures, and boosting up when needed, even several times during a single frame. The shift happens in less than 2ms, and a frame at 60fps is 16.6ms, so it could boost the CPU to do all the physics, AI and draw calls, and then boost the GPU to render the image.

You don't start rendering the image at the 'start' of the frame either – the CPU has to process any inputs from the controller, any AI, physics, impacts/collisions etc. and then tell the GPU what to draw. All of that is what makes up a frame time. That's why dropping resolution below a certain point has NO effect on frame rate: the CPU tasks are still taking time and cannot reduce the frame time any further...
Sure, it will take a while, but the Series X/S game library will definitely be mind-blowing. Sequels to beloved franchises, lots of promising new games, 23 first-party studios and the best console versions of third-party games... It's incredible.
@Senua Those are the profiles from the early PS5 dev kits produced before Sony got SmartShift working. On final PS5 hardware and current dev kits, shifts happen hundreds of times a second depending on demand, rather than a whole game having a fixed profile.
@Ryall Considering the tweet is dated very recently, I don't know whom to believe until they at least send the hardware for a final, transparent Digital Foundry breakdown analysis. But anyway, this supports the Xbox hardware team's statement that they initially tested AMD's power shifting but later decided to go for a locked, consistent, predictable performance profile instead of peak boosted numbers, for ease of dev optimisation.
@Senua Digital Foundry discussed the profile used in the early dev kits on April 2. The relevant part of the video is between 4:30 and 6:00. The tweet is recent, but it's quoting something the person was told on Discord earlier. That said, I doubt every developer replaces all their dev kits every time Sony brings out a new one.
Topic: General Xbox Series X|S Thread