Next Generation Graphics from ATI *56K*



The difference would be speed and efficiency.

And since ATI's new cards are finally able to render and use the same features NVIDIA has been using for two years, like real HDR rendering, real displacement mapping, parallax mapping, tone mapping, and many other features, it is easy to say you should be fine on both sides on the eye-candy level. The older generation of ATI cards, from the 9600 Pro up to the X850 XT (or PE, whatever it is), lacks all of those functions and is still based on the original MS Shader Model 2.0 spec from the 2002 documents, whereas the new ATI cards and NVIDIA's GeForce 6/7 are based on Shader Model 3 and have the power to properly run the advanced features I mentioned.

586640997[/snapback]

You are a little wrong there. OpenEXR HDR, displacement mapping, parallax mapping, and real tone mapping are not features of Shader Model 3.0. The reason ATI's older-generation cards cannot do them is that those features require 32-bit shader precision, which all of NVIDIA's DirectX 9.0-generation cards (from the GeForce FX to the GeForce 7800 series) support, along with 16-bit half precision. ATI's cards have been stuck at 24-bit precision up until now. As far as I know, there is no actual visible effect you can do in SM 3.0 that you can't do in SM 2.0; the differences are all in coding efficiency and making things easier for the developer.
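
If anyone wants to see what that boils down to in practice, here's a rough C++/Direct3D 9 sketch (just my own illustration, not taken from any real engine) that asks the driver what shader model it reports and whether it can create the FP16 render targets that OpenEXR-style HDR draws into:

[code]
// Rough sketch, not from any real game. Build against the DirectX 9 SDK,
// link d3d9.lib.
#include <d3d9.h>
#include <cstdio>

int main()
{
    // No device needed just to query capabilities.
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) { std::printf("Direct3D 9 not available\n"); return 1; }

    D3DCAPS9 caps = {};
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    // Shader Model 3.0: GeForce 6/7 and Radeon X1K report ps_3_0 / vs_3_0 here;
    // the Radeon 9600 - X850 generation only reports 2.0.
    bool sm3 = caps.PixelShaderVersion  >= D3DPS_VERSION(3, 0) &&
               caps.VertexShaderVersion >= D3DVS_VERSION(3, 0);

    // FP16 (16 bits per channel, floating point) render targets -- the surface
    // format the OpenEXR-style HDR path renders into.
    HRESULT hr = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                        D3DFMT_X8R8G8B8,       // normal desktop format
                                        D3DUSAGE_RENDERTARGET,
                                        D3DRTYPE_SURFACE,
                                        D3DFMT_A16B16G16R16F); // FP16 target
    bool fp16Targets = SUCCEEDED(hr);

    std::printf("Shader Model 3.0          : %s\n", sm3 ? "yes" : "no");
    std::printf("FP16 render targets (HDR) : %s\n", fp16Targets ? "yes" : "no");

    d3d->Release();
    return 0;
}
[/code]

A GeForce 6/7 or Radeon X1K should pass both checks; the 9600 Pro - X850 generation fails the first one (that's just from the caps tables, I haven't run this on every card).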

By the way, ATI's new cards do have one small advantage over NVIDIA's: the X1K series always runs in 32-bit full precision; there is no 16-bit half-precision mode like on all of NVIDIA's cards. In newer games this probably doesn't matter, but in older, first- and second-generation DirectX 9.0 games NVIDIA's cards will run anywhere from 10% to 90% of the shader programs in 16-bit half precision, which won't happen with ATI's X1K series.
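
And the half-precision part isn't purely a driver trick, the hints come from the shader code itself. In HLSL a developer writes 'half' where fp16 is good enough, and there's also a compiler flag that forces a whole shader down to partial precision. Rough sketch against the old DX9 SDK (the shader source and function name here are just made up for the example):

[code]
// Requires the DirectX 9 SDK (d3dx9.h), link d3dx9.lib.
#include <d3dx9.h>

// 'half' is a hint that fp16 is enough for this value. NVIDIA's DX9 parts run
// it at 16-bit; ATI's 9500 - X850 run everything at 24-bit and the X1K series
// at 32-bit, regardless of the hint.
static const char kPixelShader[] =
    "sampler tex0 : register(s0);               \n"
    "half4 main(float2 uv : TEXCOORD0) : COLOR  \n"
    "{                                          \n"
    "    half4 c = tex2D(tex0, uv);             \n"
    "    return c * half4(1.2, 1.2, 1.2, 1.0);  \n"
    "}                                          \n";

bool CompilePartialPrecision(LPD3DXBUFFER* outByteCode)
{
    LPD3DXBUFFER errors = NULL;
    // D3DXSHADER_PARTIALPRECISION forces *every* computation in the shader to
    // partial (16-bit) precision, not just the values declared as half.
    HRESULT hr = D3DXCompileShader(kPixelShader, sizeof(kPixelShader) - 1,
                                   NULL, NULL, "main", "ps_2_0",
                                   D3DXSHADER_PARTIALPRECISION,
                                   outByteCode, &errors, NULL);
    if (errors) errors->Release();
    return SUCCEEDED(hr);
}
[/code]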


Not necessarily. First off, Microsoft has ditched the whole Windows Graphics Foundation (WGF) concept. Instead, any video card with hardware pixel and vertex shader 2.0 compliance will be able to run Vista's interface in full "eye candy" mode. Then, Vista will ship with a new DirectX 10 API for 3D games. DirectX 9.0 (which is backwards-compatible with older versions), along with OpenGL, will run on a software emulation layer on top of DirectX 10.
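
(In caps terms that baseline is just the same sort of check as the sketch a few posts up, with the 2.0 bar instead of 3.0, roughly:)

[code]
#include <d3d9.h>

// Rough illustration of the "hardware pixel and vertex shader 2.0" baseline.
bool MeetsShaderModel2(IDirect3D9* d3d)
{
    D3DCAPS9 caps = {};
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return false;
    return caps.PixelShaderVersion  >= D3DPS_VERSION(2, 0) &&
           caps.VertexShaderVersion >= D3DVS_VERSION(2, 0);
}
[/code]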

Now, Microsoft is also releasing a new DirectX for Windows XP, DirectX 9.0L (current name). It is my understanding that DX 9.0L and DX 10 are pretty much the exact same thing, except that DX 10 has done away with backwards compatibility and probably contains a few tweaks for Vista's new vector-based 3D interface. Keep in mind we are talking about software APIs, not hardware. So if DX 10 does have extra goodies for Vista, it is only to make life easier for programmers; it does not mean at all that hardware (i.e. video cards) has to implement something. In other words, the GeForce 7800 and Radeon X1800 series are going to be able to run Vista just fine. They will simply be missing the unified Shader Model 4.0 standard that will be introduced with DX 9.0L and DX 10, but that is only for games. And look at how long it took games to use Shader Model 3.0 (we still have, what, two of them total now?).

586638147[/snapback]

Ah. Thanks for clearing that up. (Y)

