Orbis (PS4) devkit docs leak; 8GB RAM, 2.2GB VRAM, new Controller


36 replies to this topic

#16 TheLegendOfMart

TheLegendOfMart

    Neowinian Senior

  • Joined: 01-October 01
  • Location: England

Posted 24 January 2013 - 14:20

Unless the guy is going to fake a 90-page PDF file that goes into detail about the devkit, then yes, it is actually true. The specs aren't much different from the rumoured final spec for the console itself.


#17 Kriz

Kriz

    Neowinian Senior

  • Joined: 19-September 02
  • Location: Cardiff, Wales, UK
  • OS: Windows 8
  • Phone: iPhone 4S

Posted 24 January 2013 - 14:24

If it's anything like the PS3 dev kit it just looks like a VCR. :)


Just the look I want! :laugh:

#18 TheLegendOfMart

TheLegendOfMart

    Neowinian Senior

  • Joined: 01-October 01
  • Location: England

Posted 24 January 2013 - 14:27

Just the look I want! :laugh:

Posted Image
That's what the alleged Durango dev kit looks like; I don't think it's going to look much different from that.

#19 HawkMan

HawkMan

    Neowinian Senior

  • Tech Issues Solved: 4
  • Joined: 31-August 04
  • Location: Norway
  • Phone: Nokia Lumia 1020

Posted 24 January 2013 - 14:28

If it's anything like the PS3 dev kit it just looks like a VCR. :)


Man, I wish consumer consoles looked like that; then they would all stack nicely with properly sized AV components.

#20 OP Audioboxer

Audioboxer

    Hermit Arcana

  • Joined: 01-December 03
  • Location: UK, Scotland

Posted 24 January 2013 - 15:25

The devkit Kotaku posted is apparently an older one:

Currently, there are 3 types of devkits:

1) R10 boards with special BIOS, running in generic PCs

2) "Initial 1" — Early devkit

model number: DVKT-KS000K
SCE-provided PC equipped with R10XX board
Runs Orbis OS
Available July 2012

3) SoC Based Devkit: early version of the ORBIS hardware

Available January 2013


R10 Board (with special BIOS) assembled in a generic PC

Requires Windows 7 64 bit edition
Recommended: Sandy Bridge (Intel) or Bulldozer (AMD)
Minimum 8 GB RAM (system memory)
650 Watt PSU
VS2010 SP1
DWM (Desktop Window Manager) must be turned off
Application will use Windows services for everything except GPU interface
SCE will provide “Gnm”, a custom GPU interface


Do you remember the first Durango pictures? This is a very early devkit, based on Windows.

DVKT-KS000K ("Initial 1")

Runs Orbis OS
CPU: Bulldozer 8-core, 1.6 GHz
Graphics Card: R10 with special BIOS
RAM: 8 GB (system memory)
BD Drive
HDD: 2.5" 160 GB
Network Controller
Custom South Bridge allows access to controller prototypes

Posted Image


SoC Based Devkit

Available January 2013
CPU: 8-core Jaguar
GPU: Liverpool GPU
RAM: unified 8 GB for devkit (4 GB for the retail console)
Subsystem: HDD, Network Controller, BD Drive, Bluetooth Controller, WLAN and HDMI (up to 1920×1080 @ 3D)
Analog Outputs: Audio, Composite Video
Connection to Host: USB 3.0 (targeting over 200 MB/s),
ORBIS Dualshock
Dual Camera

The last devkit is the closest one to the retail console. Expect a machine with these specs or similar ones. Obviously, Sony could introduce changes to these features, but don't expect deep modifications.


Source: http://www.vgleaks.c...s-roadmaptypes/

The last one is the one we're interested in. The Orbis DualShock is mentioned as well.

#21 OP Audioboxer

Audioboxer

    Hermit Arcana

  • Joined: 01-December 03
  • Location: UK, Scotland

Posted 24 January 2013 - 15:31

Interesting article from Timothy Lottes (creator of FXAA)

Working on the assumption that the Eurogamer article is mostly correct, with the exception of maybe exact clocks, amount of memory, and number of enabled cores (all of which could easily change to adapt to yields).

While the last console generation is around 16x behind in performance compared to current high-end single-chip GPUs, that was a result of much easier process scaling, and it happened before reaching the power wall. Things might be much different this round; a fast console might be able to keep up much longer as scaling slows down. If Sony decided to bump up the PS4 GPU, that was a great move, and it will help the platform live for a long time. If the PS4 is around 2 Tflop/s, that is roughly half of what a single-GPU high-end PC has right now, which is probably a lot better than what most PC users have. If desktop goes to 4K displays, that requires 4x the perf over 1080p, so if the console maintains a 1080p target, perf/pixel might still remain good for consoles even as the PC continues to scale.
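
To put the 4K-vs-1080p figure in numbers, a trivial sanity check (assuming "4K" means 3840x2160 UHD, which is not stated in the article):

```cpp
// Pixel counts behind the "4x the perf over 1080p" claim (assumes 4K = 3840x2160 UHD).
constexpr long long px_1080p = 1920LL * 1080;   // 2,073,600 pixels
constexpr long long px_4k    = 3840LL * 2160;   // 8,294,400 pixels
static_assert(px_4k == 4 * px_1080p, "4K is exactly 4x the pixels of 1080p");
```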

The real reason to get excited about a PS4 is what Sony as a company does with the OS and system libraries as a platform, and what this enables 1st party studios to do when they make PS4-only games. If the PS4 has a real-time OS, with libGCM-style low-level access to the GPU, then PS4 1st party games will be years ahead of the PC, simply because it opens up what is possible on the GPU. Note this won't happen right away at launch, but once developers tool up for the platform, this will be the case. As a PC guy who knows hardware to the metal, I spend most of my days in frustration knowing damn well what I could do with the hardware, but cannot do, because Microsoft and IHVs won't provide low-level GPU access in PC APIs. One simple example: draw calls on PC have easily 10x to 100x the overhead of a console with a libGCM-style API.
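
As a rough illustration of where that per-draw overhead lives, here is the kind of per-object loop a PC renderer runs; every iteration goes through driver validation, hazard tracking and command translation before the GPU sees any work. This is a sketch only: the Object struct and its fields are hypothetical, and a GL 3.3+ context with a loader such as GLEW is assumed.

```cpp
#include <vector>
#include <GL/glew.h>   // assumes an active GL 3.3+ context and loaded entry points

struct Object {                 // hypothetical per-object data
    GLuint  vao, program;
    GLint   mvpLocation;
    GLsizei indexCount;
    float   mvp[16];
};

void drawScene(const std::vector<Object>& scene) {
    for (const Object& obj : scene) {
        glBindVertexArray(obj.vao);                                 // driver-validated state change
        glUseProgram(obj.program);                                  // driver-validated state change
        glUniformMatrix4fv(obj.mvpLocation, 1, GL_FALSE, obj.mvp);  // per-draw constants
        glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT, nullptr);
        // Each draw pays CPU-side API/driver cost that a libGCM-style
        // push-buffer interface largely avoids.
    }
}
```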

Assuming a 7970M in the PS4, AMD has already released the hardware ISA docs to the public, so it is relatively easy to know what developers might have access to on a PS4. Let's start with the basics known from PC. AMD's existing profiling tools support true async timer queries (where the timer results are written to a buffer on the GPU, then read asynchronously on the CPU). This enables the consistent profiling game developers require when optimizing code. AMD also provides tools for developers to view the output GPU assembly for compiled shaders, another must for console development. Now let's dive into what isn't provided on PC but can be found in AMD's GCN ISA docs:
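
(As an aside before the list: the non-stalling timer-query pattern already available on PC GL looks roughly like the sketch below. It assumes a GL 3.3+ context with GLEW; the console tooling described above goes further by letting the GPU write results straight into a buffer.)

```cpp
#include <GL/glew.h>   // assumes an active GL 3.3+ context and a loader such as GLEW

GLuint timerQuery[2];  // [0] = frame start, [1] = frame end

void initProfiler()   { glGenQueries(2, timerQuery); }
void markFrameBegin() { glQueryCounter(timerQuery[0], GL_TIMESTAMP); }
void markFrameEnd()   { glQueryCounter(timerQuery[1], GL_TIMESTAMP); }

// Poll a few frames later: returns true only once the GPU has written both
// timestamps, so reading the results never stalls the pipeline.
bool tryReadGpuTimeMs(double& ms) {
    GLint ready = 0;
    glGetQueryObjectiv(timerQuery[1], GL_QUERY_RESULT_AVAILABLE, &ready);
    if (!ready) return false;
    GLuint64 t0 = 0, t1 = 0;
    glGetQueryObjectui64v(timerQuery[0], GL_QUERY_RESULT, &t0);
    glGetQueryObjectui64v(timerQuery[1], GL_QUERY_RESULT, &t1);
    ms = double(t1 - t0) * 1e-6;   // GL timestamps are in nanoseconds
    return true;
}
```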

Dual Asynchronous Compute Engines (ACE) :: Specifically "parallel operation with graphics and fast switching between task submissions" and "support of OCL 1.2 device partitioning". Sounds like, at a minimum, a developer can statically partition the device such that graphics and compute can run in parallel. For a PC, static partitioning would be horrible because of the different GPU configurations to support, but for a dedicated console, this is all you need. This opens up a much easier way to hide small compute jobs in a sea of GPU-filling graphics work like post processing or shading. The way I do this on PC now is to abuse vertex shaders for full-screen passes (the first triangle is full screen, and the rest are degenerates; use an uber-shader for the vertex shading, looking at gl_VertexID and branching into "compute" work, being careful to space out the jobs by the SIMD width to avoid stalling the first triangle, or loading up one SIMD unit on the machine, ... like I said, complicated). In any case, this Dual ACE system likely makes it practical to port over a large amount of the Killzone SPU jobs to the GPU even if they don't completely fill the GPU (which would be a problem without complex uber-kernels on something like CUDA on the PC).
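
A minimal sketch of that vertex-shader trick, written as GLSL embedded in a C++ string. This is an illustration rather than the author's code, and it assumes a GL 4.3 driver that exposes shader-storage stores from the vertex stage; the job buffer and the placeholder work are hypothetical.

```cpp
// Uber vertex shader: vertices 0..2 build the full-screen triangle, every extra
// vertex is a degenerate that performs one unit of "compute" work instead.
const char* kUberVertexShader = R"(
    #version 430
    layout(std430, binding = 0) buffer Jobs { float jobs[]; };  // hypothetical job buffer

    void main() {
        if (gl_VertexID < 3) {
            // Standard full-screen triangle generated from gl_VertexID.
            vec2 p = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
            gl_Position = vec4(p * 2.0 - 1.0, 0.0, 1.0);
        } else {
            // "Compute" work smuggled into extra vertices; space the jobs by the
            // SIMD width so the first triangle is not stalled.
            int job = gl_VertexID - 3;
            jobs[job] = sqrt(jobs[job]) * 0.5 + 1.0;   // placeholder work
            gl_Position = vec4(2.0, 2.0, 2.0, 1.0);    // off-screen, zero-area triangle
        }
    }
)";
// Issue with glDrawArrays(GL_TRIANGLES, 0, 3 + 3 * extraTriangles) and a trivial
// full-screen fragment shader, then glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT)
// before reading the job buffer back.
```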

Dual High Performance DMA Engines :: Developers would get access to do async CPU->GPU or GPU->CPU memory transfers without stalling the graphics pipeline, and specifically the ability to control semaphores in the push buffer(s) to ensure no stalls and low-latency scheduling. This is something the PC APIs get horribly wrong, as all memory copies are implicit, without really giving control to the developer. This translates to much better resource streaming on a console.
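
For contrast, the implicit PC-side version of resource streaming looks something like the double-buffered pixel-buffer-object upload below (a sketch assuming a GL context with PBO support; the driver, not the developer, decides when the actual DMA happens and what it waits on):

```cpp
#include <cstring>
#include <GL/glew.h>   // assumes an active GL context with PBO support

// Double-buffered texture streaming on PC: we hand the data to the driver and it
// schedules the copy whenever it likes; there is no explicit control over the DMA
// engines or the semaphores gating them.
void streamTexture(GLuint tex, const GLuint pbo[2], int frame,
                   const void* src, int width, int height, size_t bytes) {
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[frame & 1]);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, bytes, nullptr, GL_STREAM_DRAW);  // orphan old storage
    if (void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY)) {
        std::memcpy(dst, src, bytes);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }
    glBindTexture(GL_TEXTURE_2D, tex);
    // Source is the bound PBO (offset 0); the driver queues the GPU-side transfer.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```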

Support for up to 6 Audio Streams :: HDMI supports audio, so the GPU actually outputs audio, but no PC driver gives you access. The GPU shader is in fact the ideal tool for audio processing, but on the PC you need to deal with the GPU->CPU latency wall (which can be worked around with pinned memory), and to add insult to injury the PC driver then simply copies that data back to the GPU for output, adding more latency. In theory, on something like a PS4, one could just mix audio on the GPU directly into the buffer being sent out over HDMI.

Global Data Store :: AMD has no way of exposing this in DX, and in OpenGL they only expose it in the ultra-limited form of counters which can only increment or decrement by one. The chip has 64KB of this memory, effectively with the same access as shared memory (atomics and everything) and lower latency than global atomics. This GDS unit can be used for all sorts of things, like workgroup-to-workgroup communication, global locks, or doing an append or consume to an array of arrays where each thread can choose a different array, etc. To-the-metal access to GDS removes the overhead associated with managing huge data sets on the GPU. It is much easier to build GPU-based hierarchical occlusion culling and scene management with access to these kinds of low-level features.
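
For reference, the "ultra-limited form of counters" that GL does expose looks like the compute shader below (GLSL embedded in a C++ string, assuming GL 4.3; the bindings and payload are illustrative). Full GDS access would add arbitrary atomics, locks and cross-workgroup signalling on top of this.

```cpp
// The GL-visible subset being contrasted against: an atomic counter that can only
// step by one, used here for a simple global append.
const char* kAppendComputeShader = R"(
    #version 430
    layout(local_size_x = 64) in;
    layout(binding = 0, offset = 0) uniform atomic_uint itemCount;   // increment/decrement by 1 only
    layout(std430, binding = 1) buffer Items { uint items[]; };

    void main() {
        uint slot = atomicCounterIncrement(itemCount);   // no add/CAS/min/max as full GDS would allow
        items[slot] = gl_GlobalInvocationID.x;           // placeholder payload
    }
)";
```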

Re-used GPU State :: On a console with low level hardware access (like the PS3) one can pre-build and re-use command buffer chunks. On a modern GPU, one could even write or modify pre-built command buffer chunks from a shader. This removes the cost associated with drawing, pushing up the number of unique objects which can be drawn with different materials.

FP_DENORM Control Bit :: On the console one can turn off the forced flush-to-zero handling of 32-bit floating-point denormals that both DX and GL impose for graphics. This enables easier ways to optimize shaders, because integer-limited shaders can use the floating-point pipes via denormals.

128-bit to 256-bit Resource Descriptors :: With GCN, all that is needed to define a buffer's GPU state is to set 4 scalar registers to a resource descriptor; similar for a texture (up to 8 scalar registers, plus another 4 for the sampler). The scalar ALU on GCN supports block fetch of up to 16 scalars with a single instruction, from either memory or a buffer. It looks to be trivially easy on GCN to do bindless buffers or textures for shader load/stores. Note this scalar unit has its own data cache as well. Changing textures or surfaces from inside the pixel shader looks to be easily possible. Note shaders still index resources using an instruction immediate, but the descriptor referenced by this immediate can be changed. This could help remove the traditional draw-call-based material limit.

S_SLEEP, S_SETPRIO, and GDS :: These provide all the tools necessary to do lock and lock-free retry loops on the GPU efficiently. DX11 specifically does not allow locks due to fear that some developer might TDR the system. With low-level access, S_SLEEP enables putting a wavefront to sleep without busy-spinning on the ALUs, and S_SETPRIO enables reducing priority while checking for unlock between S_SLEEPs.
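
There is no safe way to express that on today's PC graphics APIs, but the pattern itself is the familiar sleep-and-back-off lock. Here it is as a CPU-side C++ analogy only (std::atomic standing in for a GDS word, the sleep standing in for S_SLEEP, and the growing back-off standing in for an S_SETPRIO-style priority drop):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// CPU-side analogy: on GCN the lock word would live in GDS, and the wait would be
// S_SETPRIO (drop priority) plus S_SLEEP instead of burning ALU cycles spinning.
std::atomic<int> gLock{0};

void lockAcquire() {
    int backoffUs = 1;
    while (gLock.exchange(1, std::memory_order_acquire) != 0) {
        std::this_thread::sleep_for(std::chrono::microseconds(backoffUs));  // ~S_SLEEP
        if (backoffUs < 64) backoffUs *= 2;                                 // ~deprioritize the retry
    }
}

void lockRelease() {
    gLock.store(0, std::memory_order_release);
}
```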

S_SENDMSG :: This enables a shader to force a CPU interrupt. In theory this can be used to signal completion of some GPU operation to a real-time OS, starting up CPU-based tasks without needing the CPU to poll for completion. The other option would be an interrupt signalled from a push buffer, but that wouldn't be able to signal from some intermediate point during a shader's execution. On the PS4 this might enable tighter GPU and CPU task dependencies within a frame (or maybe even within a shader), compared to the latency wall which exists on a non-real-time OS like Windows, which usually forces CPU and GPU task dependencies to be a few frames apart.
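
The closest a PC gets today is the CPU polling a fence object, which is exactly the model an interrupt would avoid. A sketch assuming GL 3.2+ sync objects and GLEW; startDependentCpuWork is a hypothetical callback:

```cpp
#include <GL/glew.h>   // assumes an active GL 3.2+ context

// PC model: submit work, then have the CPU poll a fence until the GPU reaches it.
// A shader-raised interrupt would let the GPU notify the CPU instead, even from
// the middle of a shader's execution.
GLsync submitAndFence() {
    // ... queue GPU work here ...
    return glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
}

bool pollFence(GLsync fence, void (*startDependentCpuWork)()) {  // hypothetical callback
    GLint status = GL_UNSIGNALED;
    glGetSynciv(fence, GL_SYNC_STATUS, sizeof(status), nullptr, &status);
    if (status != GL_SIGNALED) return false;
    glDeleteSync(fence);
    startDependentCpuWork();
    return true;
}
```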

Full Cache Flush Control :: DX has only implicit, driver-controlled cache flushes; the driver needs to be conservative, track all dependencies (high overhead), then assume conflict and always flush caches. On a console, the developer can easily skip cache flushes when they are not needed, leading to more parallel jobs and higher performance (overlapping execution of things which in DX would be separated by a wait for the machine to go idle).

GPU Assembly :: Maybe? I don't know if GCN has some hidden very complex rules for code generation and compiler scheduling. The ISA docs seem trivial to manage (manual insertion of barriers for texture fetch, etc). If Sony opens up GPU assembly, unlike the PS3, developers might easily crank out 30% extra from hand tuning shaders. The alternative is iterating on Cg, which is possible with real-time profiling tools. My experience on PC is micro-optimization of shaders yields some massive wins. For those like myself who love assembly of any arch, a fixed hardware spec is a dream.

...

I could continue here, but I won't; by now you get the picture. Launch titles will likely be DX11 ports, so perhaps not much better than what could be done on PC. However, if Sony provides the real-time OS with a libGCM v2 for GCN, then one or two years out, 1st party devs and Sony's internal teams, like the ICE team, will have had long enough to build up tech to really leverage the platform.

I'm excited for what this platform will provide for PS4-only 1st party titles and developers who still have the balls to do a non-portable game this next round.


Source: http://timothylottes...nd-durango.html

#22 Yusuf M.

Yusuf M.

  • Tech Issues Solved: 1
  • Joined: 25-May 04
  • Location: Toronto, ON
  • OS: Windows 8.1 Pro
  • Phone: OnePlus One 64GB

Posted 24 January 2013 - 15:32

I hope they bump it up to 6 GB for the retail console.

Interesting article indeed, Audioboxer. Going by the PS3, it's logical for the PS4 to use LibGCM. I wonder if it'll also use PSGL. I think the next-gen Xbox will have an early advantage because of D3D11.1. A lot of developers are used to D3D and if history repeats itself, the good titles (with Uncharted or Killzone visuals) will come out later on the PS4.

#23 TheLegendOfMart

TheLegendOfMart

    Neowinian Senior

  • Joined: 01-October 01
  • Location: England

Posted 24 January 2013 - 15:44

They won't. GDDR5 is very expensive. 4GB is plenty.

#24 Blackhearted

Blackhearted

    .....

  • Joined: 26-February 04
  • Location: Ohio
  • Phone: Samsung Galaxy S2 (VM)

Posted 24 January 2013 - 17:32

I hope they bump it up to 6 GB for the retail console.


Why? Just to match what the recent reports say about the Xbox 3 having more RAM? There's no reason to do that if those same reports are true and the Xbox 3's GPU is as far behind in raw horsepower as they claim.

#25 Yusuf M.

Yusuf M.

  • Tech Issues Solved: 1
  • Joined: 25-May 04
  • Location: Toronto, ON
  • OS: Windows 8.1 Pro
  • Phone: OnePlus One 64GB

Posted 24 January 2013 - 18:18

They won't. GDDR5 is very expensive. 4GB is plenty.

That's most likely true but there's always hope. The more RAM, the better.

Why? Just to match what the recent reports say about the Xbox 3 having more RAM? There's no reason to do that if those same reports are true and the Xbox 3's GPU is as far behind in raw horsepower as they claim.

Developers want more RAM, regardless of the speed. Microsoft doubled the amount of RAM in the Xbox 360 prior to its release after Epic Games showed them what Gears of War looked like (with more memory). It cost a lot of money, but they did it. Can you imagine what the Xbox 360 would have been like if it had 256 MB of RAM instead of 512 MB?

#26 TheLegendOfMart

TheLegendOfMart

    Neowinian Senior

  • Joined: 01-October 01
  • Location: England

Posted 24 January 2013 - 18:29

That's most likely true but there's always hope. The more RAM, the better.

Not really; there are several rumours floating around, including from current developers, that the Durango RAM setup is less than ideal.

Take it with a grain of salt, but some have said that the framebuffer can only register 1GB of RAM per frame, 2GB for 3D.

Also have a read here: a developer's-eye view of both consoles. He isn't privy to next-gen specs, but gives his opinion on the current rumoured specs. Below is Durango.

Working here assuming the Eurogamer article is close to correct. On this platform I'd be concerned with memory bandwidth. Only DDR3 for system/GPU memory paired with 32MB of "ESRAM" sounds troubling. 32MB of ESRAM is only really enough to do forward shading with MSAA using only 32-bits/pixel color, with 2xMSAA at 1080p or 4xMSAA at 720p. Anything else in ESRAM would require tiling and resolves like on the Xbox 360 (which would likely be a DMA copy on 720) or attempting to use the slow DDR3 as a render target. I'd bet most titles attempting deferred shading will be stuck at 720p with only poor post-process AA (like FXAA). If this GPU is pre-GCN with a serious performance gap to the PS4, then this next Xbox will act like a boat anchor, dragging down the min-spec target for cross-platform next-generation games.
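
A quick back-of-the-envelope check of that 32MB budget, assuming one 32-bit colour target plus a 32-bit depth buffer per MSAA sample (the deferred line assumes a modest four 32-bit G-buffer targets plus depth, which is my assumption rather than anything from the quote):

```cpp
// Render target footprint = width * height * bytes-per-sample * MSAA samples.
constexpr double MiB = 1024.0 * 1024.0;
constexpr double forward_1080p_2x = 1920.0 * 1080 * (4 + 4) * 2 / MiB;  // ~31.6 MiB: right at the limit
constexpr double forward_720p_4x  = 1280.0 * 720  * (4 + 4) * 4 / MiB;  // ~28.1 MiB: fits
constexpr double deferred_1080p   = 1920.0 * 1080 * (16 + 4)    / MiB;  // ~39.6 MiB: already over, hence tiling
```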



#27 OP Audioboxer

Audioboxer

    Hermit Arcana

  • Joined: 01-December 03
  • Location: UK, Scotland

Posted 24 January 2013 - 19:06

With how good the Sony engineers have been at shrinking the PS3 OS while continuing to add more to it, I'd be much happier with faster memory (and CPU/GPU ;)) over more of it. 4GB is much more than the PS3 with its strangely split 512MB.

Or I'll put it this way: we're not seeing more than 4GB of GDDR5, as that would give us nearly another £500 PS3, which I hardly see Sony wanting this time around. I think it'll launch at £349/£399 across two SKUs.

#28 Yusuf M.

Yusuf M.

  • Tech Issues Solved: 1
  • Joined: 25-May 04
  • Location: Toronto, ON
  • OS: Windows 8.1 Pro
  • Phone: OnePlus One 64GB

Posted 24 January 2013 - 19:09

Not really; there are several rumours floating around, including from current developers, that the Durango RAM setup is less than ideal.

Take it with a grain of salt, but some have said that the framebuffer can only register 1GB of RAM per frame, 2GB for 3D.

Also have a read here: a developer's-eye view of both consoles. He isn't privy to next-gen specs, but gives his opinion on the current rumoured specs. Below is Durango.

That's very troubling. I really hope Microsoft doesn't drop the ball with Durango. I assumed the GPU would be similar to the PS4's GPU (based on AMD's GCN architecture). It just doesn't make any sense for them to use something older like AMD's VLIW4 architecture (e.g. Radeon HD 6000 series).

#29 Blackhearted

Blackhearted

    .....

  • Joined: 26-February 04
  • Location: Ohio
  • Phone: Samsung Galaxy S2 (VM)

Posted 24 January 2013 - 19:14

Not really; there are several rumours floating around, including from current developers, that the Durango RAM setup is less than ideal.

Take it with a grain of salt, but some have said that the framebuffer can only register 1GB of RAM per frame, 2GB for 3D.

Also have a read here: a developer's-eye view of both consoles. He isn't privy to next-gen specs, but gives his opinion on the current rumoured specs. Below is Durango.


That's one thing I was worried about since long before these reports on the specs started coming: that Microsoft would include some kind of eDRAM again, and once again not make it big enough.

#30 Colin McGregor

Colin McGregor

    Neowinian Senior

  • Joined: 02-September 11
  • Location: Ontario, Canada
  • OS: Windows 8 x64, Gentoo x64 Sometimes
  • Phone: Samsung Ativ S WP8

Posted 24 January 2013 - 19:16

Sony can screw the controller up all they want. As long as MS keeps the 360 controller I'm good.