If the machine is designed to only play games up to 1080p and movies at 4K, then yes, GDDR5 is wasted. There aren't enough pixels to justify the bandwidth/fill rate. 1080p/60 is what, ~3 Gbps, and no game developer is wasting cycles redrawing every pixel of every scene from scratch; they're working with textures and lighting data already resident in memory or in the ROPs.
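Quick back-of-the-envelope on that ~3 Gbps figure, just as a sketch: it assumes 24-bit colour and one full 1080p frame sent out per refresh at 60 Hz, with texture reads, overdraw and extra render targets ignored.

```python
# Rough scan-out bandwidth for a 1080p/60 display feed.
# Assumes 24-bit colour (3 bytes per pixel) and one full frame per refresh;
# ignores texture fetches, overdraw and multiple render targets.
WIDTH, HEIGHT, FPS, BYTES_PER_PIXEL = 1920, 1080, 60, 3

bytes_per_second = WIDTH * HEIGHT * FPS * BYTES_PER_PIXEL
gbps = bytes_per_second * 8 / 1e9   # convert to gigabits per second

print(f"{gbps:.2f} Gbps")           # ~2.99 Gbps, i.e. the ~3 Gbps quoted above
```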
Doesn't make sense. The problem with games isn't really CPU/GPU, it's the fact that they're largely scripted, process-driven and event-based: you play one, you've played them all. One of the announced features was dynamic maps and dynamic multiplayer, so the worlds you play would be different each time you play them. This is possible because of the cloud. That's the kind of stuff that will make gaming fun, if you ask me.
The graphics are already amazing, but again, come on, we're talking 1080p. HD is already said and done; we're talking about more interactivity, more personalized experiences and more story. More WIN.
I genuinely don't want to offend or upset you, but you really do need to do a bit more research into console/PC architecture and why memory speed matters in a unified memory system. To put it simply, the enthusiastic reaction game developers had to Sony using very fast memory speaks volumes; it is far from 'wasted'.
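To put some rough numbers on it, here's a sketch using the commonly quoted PS4 memory figures (a 256-bit GDDR5 bus at 5.5 Gbps per pin); treat them as ballpark rather than official spec, and compare against the same 1080p/60 scan-out estimate from above.

```python
# Rough comparison: unified GDDR5 bandwidth vs. what a 1080p/60 display feed needs.
# Figures below are the commonly quoted PS4 numbers (256-bit bus, 5.5 Gbps/pin);
# ballpark only, not an official spec sheet.
BUS_WIDTH_BITS = 256
PIN_RATE_GBPS = 5.5                                  # GDDR5 data rate per pin

gddr5_gb_per_s = BUS_WIDTH_BITS / 8 * PIN_RATE_GBPS  # ~176 GB/s of unified bandwidth
scanout_gb_per_s = 1920 * 1080 * 60 * 3 / 1e9        # ~0.37 GB/s for final frame output

print(f"GDDR5:    {gddr5_gb_per_s:.0f} GB/s")
print(f"Scan-out: {scanout_gb_per_s:.2f} GB/s")
# The gap is what the CPU and GPU share for textures, geometry, lighting and game
# data, which is why developers cared about memory speed rather than pixel count.
```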