I want this to be a proper discussion, not the usual gamer rubbish from people who wouldn't know the difference between a CPU and a cracker and just repeat the same lines without knowing a single thing about any of it.
I've been thinking recently, with all the X1 vs PS4 1080p/60fps stuff, about why the gap can be fairly big at times. All people spout is "better GPU and GDDR5". That's all my mate at work says, but if I put a few mints in his hand and told him they were a bunch of GDDR5 chips ("looks cool, dunnit?") he'd probably believe me (an exaggeration, obviously), which just goes to show that a lot of gamers repeating these facts haven't got a clue about the tech inside either system.
Sony's approach is vastly different to MS's. The PS4 is a fairly straightforward HSA implementation using a reasonably standard APU from AMD, although the GPU has been enhanced, since you don't usually get all the extra ROPs and so on on a GPU with 1152 shader cores. There are also system architecture tweaks which I reckon AMD helped a lot with, seeing as it's in AMD's interest to use Sony's money to guinea-pig an HSA implementation and feed any lessons back into their own chips coming out this year. It should be a fairly simple machine to code a game for, I'd imagine. With MS, the jury's out on whether this was the intended architecture from the start, or whether they didn't predict that GDDR5 would be economically viable and so went a different route to work around it, possibly making the architecture a bit of a nightmare to code for and get the best out of.
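Just to put rough numbers on the raw GPU gap people keep quoting, here's a quick back-of-envelope sketch. The 1152 shader cores for the PS4 come from above; the 768 cores and the ~800/853 MHz clocks are my assumptions based on commonly published specs, not figures from this post, and real performance obviously depends on far more than peak FLOPS:

```python
# Rough theoretical single-precision throughput: cores * 2 FLOPs/clock (FMA) * clock.
# Core counts and clocks are the commonly published figures; treat them as assumptions.

def peak_gflops(shader_cores: int, clock_hz: float) -> float:
    return shader_cores * 2 * clock_hz / 1e9

ps4_gflops = peak_gflops(1152, 800e6)   # PS4: 1152 cores @ ~800 MHz -> ~1843 GFLOPS
x1_gflops  = peak_gflops(768, 853e6)    # X1:  768 cores @ ~853 MHz  -> ~1310 GFLOPS

print(f"PS4 ~{ps4_gflops:.0f} GFLOPS, X1 ~{x1_gflops:.0f} GFLOPS "
      f"({ps4_gflops / x1_gflops:.2f}x)")
```

That's the "better GPU" soundbite in one number, but as the rest of this post argues, the memory subsystems matter at least as much.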
What I want to see is real figures on bandwidth throughput of both consoles' memory subsystems (if that's the right word): throughput from DDR3 to the memory controller to the GPU, and from eSRAM to the GPU. (There's supposedly a 1000GB+/sec internal link, 1024-bit, for all the eSRAM caches, i.e. 4x 256-bit lanes feeding a 1x 1024-bit link pumped back into the GPU/framebuffer.) That's got to be used for something, or why bother? Or maybe it's just there to ensure there's no bottleneck. Without knowing this it's impossible to tell if either console will improve much. I can imagine the X1 not being that well optimised internally, considering it was supposed to be released in 2014 rather than November 2013, and with the changes to DRM they had a lot of work cut out for them. I also think MS might be relying too heavily on Tile Based Rendering and PRT, which is what the system seems to be built around and why they added tiled resources to DX 11.2.
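In the absence of the internal numbers I'm after, here's a quick sketch of the theoretical peak figures for the main memory interfaces, computed from bus width times transfer rate. The 2133 MT/s DDR3, 5500 MT/s GDDR5 and ~853 MHz eSRAM figures are assumptions taken from publicly quoted specs, and sustained throughput will always be lower than these peaks:

```python
# Theoretical peak bandwidth = bus width (bytes) * transfer rate (transfers/sec).
# Figures below are commonly published specs; sustained rates are lower in practice.

def peak_gbps(bus_bits: int, transfers_per_sec: float) -> float:
    return (bus_bits / 8) * transfers_per_sec / 1e9

ps4_gddr5 = peak_gbps(256, 5.5e9)     # PS4: 256-bit GDDR5 @ 5500 MT/s -> ~176 GB/s
x1_ddr3   = peak_gbps(256, 2.133e9)   # X1:  256-bit DDR3-2133         -> ~68 GB/s
x1_esram  = peak_gbps(1024, 853e6)    # X1:  1024-bit eSRAM @ ~853 MHz -> ~109 GB/s per direction
                                      # (MS have quoted ~200 GB/s with simultaneous read+write)

print(f"PS4 GDDR5: {ps4_gddr5:.0f} GB/s")
print(f"X1 DDR3:   {x1_ddr3:.0f} GB/s")
print(f"X1 eSRAM:  {x1_esram:.0f} GB/s each way")
```

None of this tells us what the links between the eSRAM, the controllers and the GPU actually sustain in a real frame, which is the bit I'd like to see measured.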
So in short, is there any way to see what's going on in memory, in terms of how saturated it's getting, so we could tell whether there's room to improve at a hardware level? Of course the OS, drivers etc. need to be well optimised too, because we all know how much difference there is between a bad driver and a good one in performance terms. Maybe devkits can see this? And of course the game itself needs to be optimised as well, but for the X1 in particular this would be good to know, as it seems a lot harder to get the best out of the X1's architecture. On a personal note, whether the two consoles are a lot weaker than PCs doesn't matter; the technology and architecture inside is quite incredible and a big technical leap.
If this post is full of rubbish and doesn't seem relevant, just say so and I'll get it closed, because I don't want it to turn into another X1 vs PS4 framerate/resolution flame war; that's pointless without knowing these facts.