Independent developer Tomas Sala has been talking to Xbox Wire about his upcoming Xbox Series X|S launch title The Falconeer, discussing the benefits of working on the next-gen hardware.
In addition to pointing out the many visual and performance advantages brought about by the new consoles, Sala also highlighted that a target of 60 frames per second is "a must" for next-generation games:
"In my opinion I believe 60fps should be a foundational feature of the next generation. As a maker I don’t accept that 30fps is sufficient for action games with high-intensity gameplay, like you experience during dogfights in The Falconeer.
Achieving 60fps on previous generations required compromises and sacrifices you won’t have to make next generation – sacrifices to AI, to visuals, to background simulation, to the scale of your open world, to levels of detail… 60fps is a must, and a huge benefit to players."
As for the experience of developing for the Xbox Series X, Sala praised how "effortless" it has been, noting that the CPUs and GPUs in the new consoles allow developers to simulate reality in ways they weren’t able to before.
It's definitely worth giving the full interview a read if you're interested in the behind-the-scenes of developing for Xbox Series X|S, and you'll be able to check out The Falconeer when it arrives on all current Xbox consoles this November.
Do you think 60fps is critical to the Xbox Series X|S? Give us your thoughts below.
[source news.xbox.com]
Comments (54)
60fps is absolutely necessary and non-negotiable for a real gaming experience; otherwise it's just like playing interactive movies. Great to see consoles finally catching up again.
Most of Sony's games have been shown at 30fps, and some with two modes, performance and resolution, just like PS4 Pro and Xbox One X. It's either not a priority or not possible on PS5 to provide next-gen graphics at 60fps because of the variable and less powerful CPU and GPU. That's why they have the quality and performance modes.
Easy to get 60fps, but at what cost? We shall see. I often prefer playing certain games at 30fps but with more impressive graphics. Other games I prefer at 60fps. Having a choice would be good.
@Senua "60fps is absolutely necessary and nonnegotiable for real gaming experience otherwise it’s just like playing interactive movies." - What a load of BS.
Ocarina of Time ran at 20fps (17 on PAL) was it not a "real gaming experience"???
A little closer to home - Halo 3 aimed to run at 30fps on Xbox 360 but often fell short, was that not a "real gaming experience" either?
I get what you are trying to say. Yes, 60fps is better and preferred in many, but not all, types of games, but it isn't absolutely necessary or non-negotiable.
I'd rather play Halo 3 at sub-30fps than Halo 5 at 60fps.
Gameplay is FAR more important than any framerate.
60 frames per second certainly improves the experience in fighting and rhythm games. However, there are times when you need longer than that to draw the frame you need to convey your vision. Games like The Medium are 30 frames per second because they're trying to do something difficult that can't be done in 16 ms. Personally I think it's better to have the new experiences rather than tying everyone to an arbitrary performance target.
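For anyone wondering where that 16 ms figure comes from, it's just the per-frame time budget at a given frame rate. A quick sketch of the arithmetic (no console-specific assumptions):

```python
# Per-frame render budget in milliseconds for a given frame-rate target.
def frame_budget_ms(fps: float) -> float:
    return 1000.0 / fps

for fps in (30, 60, 120):
    print(f"{fps} fps -> {frame_budget_ms(fps):.2f} ms per frame")
# 30 fps -> 33.33 ms, 60 fps -> 16.67 ms, 120 fps -> 8.33 ms
```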
@AJDarkstar “I don’t recall having seen MS even show off their “25 TF of RT” yet.” - then maybe you should go back and check the Minecraft RT full path tracing demo running on Series X which Microsoft demoed earlier this year. Same goes for Watch Dogs: Legion, Bright Memory Infinite etc.
“If anything, the PS5’s GPU is more geared towards 60fps, given the 2.23GHz clock (it may have 36 CUs to XSX’s 56, but a game that uses fewer CUs can theoretically run that much faster on Sony’s machine.” - No, no, just no, that’s not how physics works. There’s a reason why a 2070 Ti is still weaker than a 2080 even though the former is overclocked. If anything, the PS5 GPU needs to downclock to 9.2TF in more CPU-heavy 60fps scenarios due to AMD's power shifting (evidence: Demon’s Souls 1440p60 / DMC5 1080p60). It’s one of the reasons the Xbox hardware team has explained they went with a more stable, locked, consistent, predictable performance profile without the AMD power-share bottleneck.
Assassin's Creed: Valhalla runs at 60FPS on XBSX and 30 on the PS5, and there's a reason for that.
YES, absolutely. Make it a focus. Long overdue.
Once you get used to 60, it’s hard to go back to 30. A lot of early 3D games are unplayable now because they put out 20-ish FPS slide shows. If 60 isn’t a focus this gen, I’m getting a PC.
@themightyant 60fps has been the default yardstick for a long time in PC gaming. I also recall enjoying some really old competitive console fighting games running at 60fps on CRT TVs. 60fps should be the default for any high-octane gameplay scenario, and I’m happy consoles are catching up. From now on, every console and PC's hardware performance capabilities should be judged with 60fps as the common denominator. If some people still prefer the “filmic” 30fps mode (which is absolute BS) then there should be an option for that too.
@electrolite77 Yeah, someone gets it. It seems like people have suddenly stopped demanding better and are now obediently following every false, anti-logic narrative and excuse being thrown at them, like sheep.
@Darthroseman not sure where you got that from. Valhalla runs at 60fps on both. A developer apparently stated the performance of both consoles is basically identical.
https://www.pushsquare.com/news/2020/09/looks_like_assassins_creed_valhalla_will_be_60fps_4k_on_ps5
I can easily switch between 30fps and 60fps. It takes a few minutes to readjust but that’s it.
It's been the standard for nearly 50 years; consoles have just failed to hit it for the last 10-15. Glad to see it coming back, even if support is patchy so far. Although 60fps as a performance mode only is still a failure of both hardware and game design.
@AJDarkstar "but a game that uses fewer CUs can theoretically run that much faster on Sony’s machine".
No, that's what Sony said, but it's not true given that the maximum graphical power of the PS5 (9.2-10.28 TFLOPS) is much lower than the Series X (a sustained 12.1472 TFLOPS) and the PS5's CPU and GPU memory are slower.
@graysoncharles You say that jokingly, but I've been playing almost every game 1440p/120fps for 6 years now and I cringe going back to 60.
@Senua I always laugh when companies say "filmic" and "cinematic" to try to damage-control their frame rate. Even more so when the fanboys believe it.
@tatsumi "I can easily switch between 30fps and 60fps. It takes a few minutes to readjust but that’s it."
Exactly. Some people seem to be much more sensitive to this; I'm glad I'm not one of them.
I find the same switching between controllers. Offset sticks going from Xbox to PS is immediate, I don't even have to think about it.
The only one that gets me for a while is Xbox to Switch. It literally reverses the button order, AB to BA and XY to YX, with the right button (A on Switch) being confirm and the bottom button (A on Xbox) also being confirm, and vice-versa for cancel. It takes a little time to re-acclimatise, or you can remap the buttons!
@graysoncharles I agree performance, including framerate, is part of gameplay.
My point was I'd rather play Halo 3 than Halo 5. Framerate is a secondary consideration and absolutely NOT "non-negotiable for a real gaming experience" as the OP stated.
Of course, as @Medic_Alert mentioned, I'd rather play at higher than 30fps if feasible, but at the end of the day I'd always rather play a better game at (almost) any frame rate, e.g. Ocarina at 17fps.
I also accept we are all different and some people seem to be more sensitive on this.
30fps has been 'sufficient' for MANY games and if you look at the Game of the Year winners, the vast majority of the winners are 30fps games.
60fps is great and plays well - it improves input lag and has other benefits too - but to say it's 'essential', a necessity, is wrong. For certain game styles, like fighting games or twitch shooters, 60fps may be the minimum required, but 30fps can still provide an excellent gaming experience.
There is no way that ALL games will be 60fps. They may have 60fps modes with some compromises - i.e. no RT, reduced visual quality etc. - because the GPU has much less time to render each frame.
As for the 'speed' of a GPU, the faster clock may allow certain processes to run a bit faster, but that doesn't make a much smaller GPU with much less overall graphical performance suddenly equate to a much larger GPU. The 'extra' speed could mean a ray-traced 'ray' bounces one more time per frame because the GPU is faster, but overall, with far fewer shaders, it will have far fewer 'rays' to trace - requiring much more work to de-noise the image. The PS5 has ~2/3rds of the shaders, meaning it can only process 2/3rds of the instructions, but it can do those instructions a bit quicker. However, the Series X can process a LOT more instructions per second - meaning more processing per '16.6ms' - all things being 'equal'. The 'equal' part, though, is going to be difficult because of different APIs, customisation etc.
Let's not forget that the Series X isn't running slow - it's faster than a LOT of stock PC GPUs and even some 'overclocks' too. When you look at a GPU series, like the RTX 2060, 2070, 2080 etc., the main difference is cores, with the top-of-the-range models having more cores - bigger GPUs. GPU manufacturers make one or two chips and then disable cores (for yield) for lower-spec versions of the series. The GPUs with the most cores active, the largest, are the most 'powerful', and it doesn't matter if you overclock one that's 2/3rds the size, you won't suddenly get the same game performance metrics.
@graysoncharles What we've seen are backwards-compatible games' loading times on Series X, without next-gen hardware features or any kind of code optimisation. Sony hasn't shown real loading times yet, so we still have to see how different the faster SSD experience is, considering the Xbox Velocity Architecture for optimised games, and whether or not there are any bottlenecks on PS5.
Perhaps doesn't need to be essential, but I would very much like it to be the norm, especially in fast-paced action games (and competitive multiplayer titles).
@AJDarkstar No, you don't understand how it works, it's not just TFLOPs but CPU and GPU but I won't get into a loop with you. If you want to believe in Sony's fairy tale, it's up to you.
@Medic_Alert Scaling games is the key, indeed. Xbox development kits are designed for that, with a lot of automatic optimisation: turning down sliders, perhaps disabling the most demanding graphical features, turning ray-tracing on and off like in the Minecraft demo, importing Windows 4K textures, etc.
Because some PS5 first-party games are going to be released for PS4 and PC, they can't be built exclusively for the PS5's unique hardware like the faster SSD. Most third-party games won't be designed for it either, nor for the controller's oddities like the touch pad or the motion controls.
@AJDarkstar Watch Digital Foundry, who did a video on Minecraft RT running on an ACTUAL Series X console - although that was 'early', it was still running on a Series X.
Whether it comes to console or not - especially with the Series S in the line-up - we will have to see. That fully path-traced Minecraft looks like it would really struggle at 1080p on a Series S, so I don't know how that would sit if Series X gets RT and Series S doesn't.
But it has been seen running on a Series X by 'independent' sources.
@Medic_Alert Same goes for resolution. I'd rather play Breath of the Wild at 900p than most other games at 4K.
I like innovation, hell my main reason for buying both next-gen consoles at launch is to experience my existing titles at better framerates, resolutions and with extra bells and whistles... it's all nice to have... but it's in no way essential.
I just wish we tried to innovate gameplay as quickly as we innovate specs. This constant pursuit of higher numbers, while nice, isn't the be all and end all.
I loved God of War on PS4 PRO in resolution mode. It was eye-poppingly gorgeous. When I switched to 1080p and 60 fps I thought I was playing a different game. I'll gladly take better graphics than fps if it makes a big difference in the graphics department. One exception: racing games. WRC8 and 9 are a pain to play at 30fps (ditto for Assetto Corsa Competizione).
better headline: “developer thinks others should do what he did”
Seriously, please, gaming sites: stop amplifying the nonsense bro culture in gaming (esp. you, IGN). It’s a computer. Devs will do whatever they want with it. You will still have 10fps games on XSX and horrible bugs.
@Medic_Alert Don't tease me!
I'm hoping, and think it's more likely, that the long-rumoured Switch Pro might have 1440p with DLSS, over native 4K, for BotW2... and even more hopefully BotW will get a (paid) patch.
Though none of that sounds very Nintendo, I'm hoping the vast gap between the next-gen systems and Switch will force their hand a little if they want to be able to have cut-down versions on a Switch Pro.
@AJDarkstar I didn't say that the PS5 is not able to do 60fps; even the Wii U can do that. You are the one talking "fanboy" nonsense by suggesting that a less powerful CPU and GPU (using the same architecture) and slower GPU memory are able to obtain better graphical results than the Series X. Thus, if you write that recurrent forum word, "fanboy", in the middle of a pedantic yet wrong comment, you make it sound like you are not just deluded but a fanboy yourself, or just using the word as a cheap argument.
Is it just me who, whilst playing games, doesn't even take notice of the frame rate...
If I'm really into a game then I couldn't care less if it was 30fps or 60fps, I'd be enjoying it too much to notice.
@Valhalla91 No, It's not just you
I only notice it when it's really inconsistent.
It seems others are a little more sensitive to it.
@AJDarkstar Also, TFs don't matter until you start comparing same-generation GPUs from the same manufacturer. That's why you can directly compare the TFs of RTX 3000 GPUs, or of RDNA 2.0 GPUs, against each other, but can't compare TFs between Nvidia and AMD.
With the consoles, they are supposedly running the same GPU and CPU architecture - much like the Xbox One and PS4. Of course there are still 'differences' due to some customisation and the APIs - the PS5 won't be using DirectX RT or DirectML, for example, and won't have Sampler Feedback either.
You have differences in the approach to data throughput too, with Sony deciding to go with as much bandwidth and as many channels as possible, but MS opting to look at ways of reducing the amount of data needed - such as only streaming the parts of textures that are visible instead of whole texture files, and using ML to upscale to high res to reduce the size of texture files. It could (if devs utilise it) mean that Sony can download 10GB of data super-fast but MS only needs to download 3GB of data by comparison, and so they 'even' out.
What is true though is that the GPU's still have a massive difference in shader count - almost the same % difference the PS4 had over the XB1. Each shader can handle 2 instructions per cycle so on each 'cycle', the Series X can handle vastly more instructions.
It's like having 3000 printers print out a newsletter at 60 sheets per minute compared to having just 2000 printing out 70 sheets per minute. Over a minute, the 3000 will print out 180k sheets compared to just 140k with the 2000 printers. That's kind of how TFs are calculated - the theoretical maximum number of floating-point operations per second a GPU can do - and whilst the difference doesn't seem much when talking about 'TFs', between 10.2 and 12.1, the PS4 was 1.8TF and, as we know, RDNA 2.0 is a LOT more efficient per TF too, so that 'difference' is quite significant.
Like I said, you have 2304 shaders in a PS5 compared to 3328 shaders in a Series X, meaning it can handle a LOT more instructions per cycle, even with slightly fewer cycles per second, but overall that's nearly 2 trillion (yes, trillion) more floating-point operations per second possible on a Series X GPU.
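To put rough numbers on that, here's a minimal sketch of the usual peak-FP32 calculation (shader ALUs x 2 ops per clock x clock speed), using the shader counts and clocks quoted in this thread rather than anything measured:

```python
# Peak FP32 throughput = shader ALUs x 2 ops/clock (fused multiply-add) x clock (GHz).
def peak_tflops(shader_alus: int, clock_ghz: float) -> float:
    return shader_alus * 2 * clock_ghz / 1000.0  # result in TFLOPS

xsx = peak_tflops(3328, 1.825)  # 52 CUs x 64 ALUs, fixed clock
ps5 = peak_tflops(2304, 2.23)   # 36 CUs x 64 ALUs, peak (variable) clock

print(f"Series X: {xsx:.2f} TFLOPS")                 # ~12.15
print(f"PS5:      {ps5:.2f} TFLOPS")                 # ~10.28
print(f"Gap:      {(xsx - ps5) * 1000:.0f} GFLOPS")  # ~1871
```

On those quoted figures the gap works out to roughly the "nearly 2 trillion" extra operations per second mentioned above.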
With cores also being used for hardware-accelerated RT, having more cores to 'free up' for RT means they can still keep a lot of cores for the traditional rendering pipeline too.
The differences in the customisation and APIs may help - if DirectX is a bit more bloated, for example, and so less efficient - so it's not as black and white as it may seem. And of course you need devs optimising the game on BOTH sets of hardware to get a 'clearer' idea of the difference the GPU alone makes. But the fact is both are AMD, both using RDNA 2.0 processors, so they should be comparable in terms of TF performance - unless other factors (like APIs, customisation etc.) skew results slightly...
@AJDarkstar
1. It WAS running on Xbox Series X and was using full path tracing, not some checkerboarded sub-res pseudo ray tracing with mostly rasterisation. Even cards like the 3080 struggle to do full path tracing on old games like Quake and Minecraft at ultra performance.
Digital Foundry -
https://youtu.be/agUPN3R2ckM
Rather than listening to some YouTube influencers, try to learn a thing or two from actual veterans of the gaming industry and the computer graphics subject as a whole. And no, games cannot be built just for "low CU counts"; that's not how vector graphics algorithms work. If that were the case they would never run on cross-gen and PC hardware. That's the most idiotic logic I have heard from anyone.
2. Multi-platform devs have also claimed they were able to run more special effects and fidelity at higher performance profile easily on Series X.
3. It is a HARSH FACT that, due to the AMD power-share bottleneck, the GPU needs to be severely downclocked in order to provide sufficient CPU power for 60fps (evidence: Demon’s Souls 1440p60 / DMC5 1080p60). Even the X1X can do native 4K 30fps on rich open-world titles like RDR2, but it's the CPU bottleneck which needs to be countered to achieve 60fps. Computer vector graphics algorithms are highly parallel, which is exactly why that massive compute-unit difference, plus the other RT/FP8 (ML) units that come with it, will absolutely matter for processing higher-fidelity, rich, dense environments and generating frames, especially in low-latency, CPU-heavy situations where a frame only lasts 16.7ms at 60fps and 8.33ms at 120fps. This is exactly why even the higher-tier RDNA 2 and Nvidia cards always offer far more CUs. It is just practically impossible to do vector processing without a large number of parallel worker cores. This is exactly why GPUs exist in the first place; some things are just not mathematically possible to achieve even if you hit 5GHz of clock frequency but don't have sufficient parallel processors.
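A quick sketch of what those frame budgets mean in terms of available compute, multiplying the peak figures quoted earlier in the thread by the frame time (theoretical peaks only; real workloads never reach them):

```python
# Theoretical FP32 operations available per frame = peak TFLOPS x frame time.
def ops_per_frame(peak_tflops: float, fps: int) -> float:
    return peak_tflops * 1e12 / fps

for name, tflops in (("Series X", 12.15), ("PS5 (peak clock)", 10.28)):
    for fps in (60, 120):
        print(f"{name} @ {fps} fps: ~{ops_per_frame(tflops, fps) / 1e9:.0f} billion FP32 ops/frame")
# Series X: ~202 billion at 60 fps, ~101 billion at 120 fps
# PS5:      ~171 billion at 60 fps, ~86 billion at 120 fps
```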
@Darthroseman Pretty sure AC Valhalla runs at 4k 60 on both PS5 and Series X?
@AJDarkstar "@Senua nice of you to make assumptions on where I get my information, tells me everything I need to know about you. Enjoy your ignorance".
The only ignorant one here is you. Not just ignorant, but presumptuous and unaware of your own ignorance.
@Alec1992 You're correct, I was going off old information. I will still say it is concerning to me that they are already conceding they cannot do 4K 60FPS on Miles, Demon's Souls, and Ratchet and Clank, while Xbox is very much saying they are targeting 4K and at least 60FPS. The PS5 and Xbox will both sell well and be fine systems, but the PS5 as a piece of hardware has many more red flags for me. There's a lot of "this is a holdover to run these until the PS5 Pro comes out and is actually the system we should have put out in the first place" vibes.
@AJDarkstar From a GPU perspective, the PS3 wasn't more powerful, but it had the CPU to help in certain tasks. The RAM too was split into two 256MB pools compared to 512MB of unified RAM, and the 'complex' CPU was a problem for all but devs coding specifically for it. The CPU was 'supposed' to be the area that would make the PS3 more powerful overall, but in terms of GFLOPS, the Xbox had the more powerful GPU.
The CPU of the Xbox was a more 'general' 3-core design, whereas the PS3 only had 1 core for 'general' CPU processes and lots of other specialised cores that could be used to boost graphical performance - IF the game was developed to use them from the start. If a game required more than 256MB for 'graphics', it was easy on the 360 but a pain to compress it all down into the GPU's bank of RAM on the PS3.
This is why games built for multi-platform - ones that scale across PCs with 'general' core CPUs, unified banks of RAM etc. - were so much easier to port to the XB360, and why it's so difficult to get backwards compatibility for the PS3. The GPU is only 'part' of a system, and the fact that the more powerful CPU in the PS3 was so difficult to utilise, as well as being limited by the split RAM, is why the 360 ran games better.
You have to be careful when you say 'more' powerful, because in that case the GPU wasn't - and the CPU, whilst more powerful, was more restrictive and difficult for games that relied on more general cores rather than specialised ones that were 'difficult' to utilise.
@BAMozzy Microsoft have made a lot of noise about their decompression technology, but the PlayStation 5 will have something similar. The makers of Oodle, the Kraken technology the PS5 uses for its decompression, are working on Oodle Texture, which will basically function the same as the Series consoles' texture-streaming technology. https://www.notebookcheck.net/PlayStation-5-I-O-unit-gets-seriously-souped-up-to-a-potential-17-38-GB-s-bandwidth-thanks-to-potent-combination-of-Oodle-Kraken-and-Oodle-Texture.496015.0.html
The Xbox Series X, with its large number of compute units at standard clocks, is going to need to run at quite a high resolution to be fully utilised. It'll certainly be the best-performing console at 4K. But at lower resolutions (say 1080p) it could be easier to allocate work to the PlayStation 5's fewer, faster compute units, and so it may perform better.
@graysoncharles exactly. If getting a certain fps is a red line condition they should be on PC where they have control over that. Otherwise, everyone should just focus on what games they are looking forward to. And I would like to see mods and gaming websites using their power to make the community less toxic. It’s so weird that for 30+ years there are these nasty pervasive screeds about which system is more powerful. Is it just to get clicks to run ads? Why is this acceptable? Can you imagine how many people leave your site and never return because of hostile ***** like this?
(Aside: can you imagine drawing a line in the sand in any previous generation and saying that if it can't run 60fps, or do trilinear filtering, or pick any graphical feature, then that game or system shouldn't exist? That is insane. That rules out almost every game and every system ever made, period. Yet here we are, still, in 2020.)
@Darthroseman I guess a PS5 Pro would be theoretically possible, as the GPU would still be smaller than the largest rumoured RDNA 2 PC graphics card, whereas if you doubled the Xbox Series X GPU you'd be significantly bigger than anything available on PC. But I think it's highly unlikely. At the moment the base PS4 is selling about five times as much as the PS4 Pro, so I just don't see the commercial incentive (based on last week's Japanese sales charts).
60 FPS should be the standard on Series X and PS5. There's no reason why any game shouldn't be 60 FPS on these systems.
@Ryall But the Series S has an even lower CU count. It's not possible to build games just for low CU counts; that's not how vector scalable algorithms work. There's a reason why more expensive GPUs deliver more performance thanks to higher CU counts: it's not just about 4K resolution, it's about the additional textures, special effects, high-res ray-traced AO, GI, volumetric shadows, DLSS/DirectML etc., which will absolutely require more CU headroom. Some next-gen optimisations like the mesh-shading geometry engine, VRS Tier 2 and Sampler Feedback are necessary to free up much of that CU headroom.
@Ryall I am not saying that both don't have decompression technology that will seriously help with transferring data, but Sony's method relies purely on the speed of data transfer (compressed or otherwise), whereas MS is also looking at reducing the 'amount' of data that needs to be transferred, in several ways.
One way is by only transferring the parts of a texture file that are being used and are visible to the player. That means some textures are not needed because they are not currently in view, and for other textures they only need 'part' of the file because the rest is not being used or visible - thus reducing the amount of data that 'needs' to be transferred. This is the Sampler Feedback process.
Another aspect is using AI upscaling - taking a 'lower' res texture and upscaling it to higher res, a bit like DLSS does with the 'whole' image. Therefore, the texture files themselves can be a lot smaller. Essentially it's like having a 1080p image or a 4K image to transfer over, and if you can make the 1080p image look 4K with AI upscaling, it's a big saving on the amount of data you need to transfer.
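As a rough illustration of the saving from shipping lower-res textures and upscaling on the GPU, here's a sketch using uncompressed RGBA8 sizes (real texture formats are block-compressed, so treat these as illustrative upper bounds):

```python
# Uncompressed RGBA8 footprint of a texture at a given resolution, in MB.
def rgba8_size_mb(width: int, height: int) -> float:
    return width * height * 4 / (1024 ** 2)  # 4 bytes per pixel

native_4k = rgba8_size_mb(3840, 2160)    # ~31.6 MB
source_1080 = rgba8_size_mb(1920, 1080)  # ~7.9 MB
print(f"4K texture: {native_4k:.1f} MB, 1080p texture upscaled on-device: {source_1080:.1f} MB")
print(f"Data to stream shrinks by ~{native_4k / source_1080:.0f}x per texture")  # ~4x
```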
An image could have hundreds (if not thousands) of texture files to stream across, and if they are ALL high res and ALL streamed across, you need much greater bandwidth - whether they are compressed or not - compared to only streaming in the lower-quality textures that are 'visible' and then using AI to upscale them to high-res versions.
Because you are reducing the number of files needed, and reducing the size of each file too by reducing the quality, you are reducing the amount of bandwidth you need to render that same image. If you reduce, say, 10GB of 'compressed' data to, say, 3GB of compressed data, then the 'speed' advantage Sony has is reduced. It still has to transfer 10GB of data, so it needs more bandwidth than MS, who only need to transfer 3GB in this example. Both will do that in around a second.
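As a back-of-the-envelope check on that example, here's a sketch of the transfer times, assuming the compressed-throughput figures the platform holders have quoted (roughly 8-9 GB/s effective for PS5, 4.8 GB/s for Series X; vendor claims rather than measurements, and Oodle Texture could push the PS5 figure higher):

```python
# Time to stream a payload at a given effective throughput.
def transfer_time_s(payload_gb: float, throughput_gb_s: float) -> float:
    return payload_gb / throughput_gb_s

# Hypothetical payloads from the example above: 10 GB on one console vs a
# reduced 3 GB on the other, at the commonly quoted compressed throughputs.
print(f"PS5:      10 GB @ ~8.5 GB/s -> {transfer_time_s(10, 8.5):.2f} s")  # ~1.2 s
print(f"Series X:  3 GB @  4.8 GB/s -> {transfer_time_s(3, 4.8):.2f} s")   # ~0.6 s
```

Whether the bigger payload lands just under or just over a second depends on which quoted figure you use; the point of the example is simply that shrinking the payload narrows the gap.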
The advantage that Sony has is that it has that speed advantage regardless whilst MS would rely on Developers utilising Sampler Feedback and/or DirectML.
As for the GPU size, you don't see PC GPUs having 'issues' with games because their top-of-the-line GPUs have a LOT more cores/shaders. The issues tend to come more from CPUs and the way games assign work to a certain core/thread. It's what screws up Crysis, because that was built on the principle that CPU cores would incrementally get faster and faster and faster - however, what happened was that CPUs got more and more cores and multi-threading. Too much is put on a 'single' core, which then requires much faster performance to process the amount of data put on it, instead of spreading that workload across multiple cores.
Also, with RT using cores, the Xbox can use many more cores and still have more cores left for traditional rendering. You can use 12 cores, for example, on just RT and the Series X still has 40 cores running faster than an overclocked RX 5700 XT - that's 40 cores at about 1750MHz (although the 50th Anniversary edition can be overclocked to about 1980MHz, I believe). The point is, size matters, and the top-of-the-range RDNA 2.0 GPU is expected to have 80 cores - that will NOT be hampered by its size...
Woot, finally an article with a discussion on PXB!
I really don't think this is going to happen. Devs are going to prioritise flashy and marketable graphics advancements, just as they have always done, and it will bog down the hardware and keep framerates stuck in the 20s, just as it always has. 3080s are out now; VFX are going to target the max of what's possible. Consoles will still have to choose between maxing out VFX or FPS... same as always. That cycle will never end.
We see it with Spider-Man and Demon's Souls on PS5, but I don't believe that's going to be a PS5 design limitation. It may be "worse" on there, but I think it's just going to be more of the same this generation. A ray-tracing graphics mode with 4K and reflections, or a compromised 60fps mode with reduced details and no ray tracing. In that sense, 9th-gen hardware is simply the best place to play 8th-gen games, or preview pre-release versions of 10th-gen games.
@themightyant So much agreement on the ABXY swap. I hate hate hate that. That's actually Sega's fault. Microsoft just copied the Dreamcast controller (they helped make the Dreamcast and Sega helped make the first XBox.) So every time you curse out the ABXY button reversal, just remember that you're still reliving the 90's console war and Sega's still doing what Nintendon't. Hitting "confirm" when you wanted to hit "cancel"? THAT is Blast Processing!
It also messes up Phil Spencer... there was that fun time he was playing Super Lucky at the Nintendo booth and kept dying because of the button swap, and told them they had their buttons mapped wrong.
@AJDarkstar
PS3: 230.4 GFLOPS; Xbox 360: 240 GFLOPS.
The graphics card in the Xbox 360 is based on an AMD design, and the system has 48 unified shaders running at 500MHz. This gives 96 billion shader operations per second on the Xbox 360, compared to the 74.8 billion shader operations that the PS3 manages with its 24 pixel-shader pipelines and 8 vertex-shader pipelines running at 550MHz. In terms of raw performance, the PS3 loses slightly right out of the gate.
The Xbox 360 has 10MB of EDRAM which is tied directly onto the GPU. This EDRAM acts as a very fast buffer, allowing regularly accessed data to be quickly grabbed for processing. As it turns out, it also makes AA (anti-aliasing) a lot easier on Microsoft’s console.
The Xbox 360 isn’t just having an easier time of it because of the EDRAM and the higher number of shader operations, though; it also has a small on-chip design advantage with AA. The PS3 GPU is based on the G7x series (from Nvidia), which means its ROPs can handle 2 multisamples per pixel per cycle. AMD’s Xenos (XB360) is different - its ROPs are designed to handle 4 multisamples per pixel per cycle. All fill rates are full speed, and the Z-only fill rate is also double-pumped. This, combined with the EDRAM, gives the Xbox 360 GPU an edge.
I’m just quite disappointed that we most likely won’t be seeing games in native 4K at 60fps with ray-tracing. It seems that it will be either/or, like last gen choosing between performance and resolution modes. Games with ray-tracing enabled will most probably have reduced visual quality. You’d think, with the X1X being able to display 4K at 60fps, that the Series X could add ray-tracing without compromising on visual quality or frame rate.
@gollumb82 I think the big thing is games like GoW are interactive movies with mediocre gameplay. It's about "experiencing" the world. When you drop the gfx and turn it to 60fps you're playing it as a raw video game rather than an immersive experience. And as a raw video game, it's really not that great. Stiff controls, dated mechanics. It plays like a PS2 game because the design is basically that of a PS2 game. The "experience" depends on the visuals.
That's not to say it's a bad product as a result. There's absolutely a place for "interactive entertainment" to be something other than being "video games", I think. But that's one of many cases where, when you remove the immersive and breathtaking environment, you're left with exposing the video game portion of the product only. And the video game portion isn't really that great by video game standards. OTOH, take Splatoon and run it at 30fps with ray-traced ink. It would look gorgeous in still shots. But it would be entirely un-fun to play.
has anyone here coded a ray tracing engine? Do you understand what you are asking for when you say you want or expect 4k 60fps raytracing at a sub $500 price point? I mean you can have it, if you are ok just playing tabletop pool simulators... I don’t think you know what you are saying.
@koffing Exactly. This is why we only see full path tracing on games like Quake and Full ray tracing demos on things like Minecraft. However ray traced audio, shadows or reflections are cheaper to perform and we could see some of these features in next gen consoles.
We're still a gen (or two) away from having fully ray traced titles more frequently.
@NEStalgia I couldn't disagree more.
In God of War (2018) the GAMEPLAY and combat are fantastic and have hidden depth, nothing at all "like a PS2 game". It's so much more than graphics. Gameplay, sound, level design, story and acting all push the envelope; it's a pretty complete package and, for me, one of the defining games of the generation.
But of course you are entitled to your view.
@themightyant We may have a difference of opinion on the "depth" of the combat, but the rest of what you say isn't a disagreement with me. It's a total immersive experience more than a video game. Sound, level design, story, acting, graphics etc. go together, and when you reduce the immersion by reducing the graphics, you cancel out a significant part of that immersion. It runs better as a video game, but even if you do feel the combat is top-class while I do not, there's no denying that you're cancelling out a lot of what makes up the total package that defines it as an immersive experience, and thus why the 30fps compromise is OK. If they could patch it to 60fps WITH all the graphics enabled, then that would be another story entirely.
@NEStalgia I guess we'll just have to disagree on the combat. I've played through it three times now, twice getting absolutely everything, and I'm still not bored of the combat. It has more depth, tactics and nuance than most give it credit for, particularly on the harder difficulties. I wasn't alone; there were many articles praising the combat specifically when it came out (one below).
More than anything it was just a LOT of fun.
But I understand, again, that we all have different tastes. I couldn't get on with Devil May Cry 5's combat yet some think it's the high bar; others hate Dark Souls' combat, which I love. It's all subjective, there is no right or wrong.
https://www.gamespot.com/articles/ps4s-god-of-war-finally-fixes-the-series-tedious-c/1100-6457637/
@Darthroseman yeah I get what you're saying. I've got my eyes on a Series X over ps5, at least initially anyway. Fingers crossed they'll both be great and have some great games on both platforms. The most worrying thing about this generation is wondering how I'm going to afford a TV that supports 4K 120fps
@themightyant
+1 on the God of War combat. The sequel is my biggest lure for wanting a PS5.
@themightyant Keep in mind I didn't say that I dislike the combat or the game. I like it. I just don't feel the combat represents a particularly special gameplay device when you see it laid bare without the trappings.
As for From Software, yeah, I can't stand their game design. Though I don't think the combat itself is bland after DS1 (DS1 was painfully stiff), and the combat is still the games' main, actually their only, selling point, I just can't stand the endless death repetition, absent narrative, and empty and soulless (pun not intended) but admittedly gorgeous levels. Sekiro is a little better. But I still don't understand the internet's obsession with From.
What about the 120fps it’s capable of?