NVIDIA officially announces Tegra K1, its Kepler-based mobile chip

NVIDIA promised some news on Sunday night during its CES 2014 press conference, and it delivered when it announced plans to launch the Tegra K1. Its next-generation mobile processor packs 192 cores and is based on the Kepler architecture found in the company's latest GeForce PC graphics cards.

The Tegra K1, previously demoed under the code name "Project Logan," is being promoted by NVIDIA as a way to bring graphics normally found in high-end PCs and consoles to mobile devices. NVIDIA announced that Epic Games will port its latest graphics engine, Unreal Engine 4, to run on the Tegra K1. The press conference included a couple of Unreal Engine 4 demos running live on the chip, showing off some impressive lighting and shadow effects, along with detailed textures and more.

The Tegra K1 will be released in two versions. One is a 32-bit ARM Cortex-A15 design with a "4-plus-1" CPU configuration clocked at up to 2.3GHz. The other has two 64-bit CPU cores designed by NVIDIA under the code name "Denver," based on the ARMv8 architecture, with speeds of up to 2.5GHz. NVIDIA briefly showed the Denver version of the Tegra K1 running an Android desktop during the press conference.

The 32-bit Tegra K1 is expected to start showing up in devices in the first half of 2014, while the "Denver" 64-bit version will begin appearing in products in the second half of the year.

Source: NVIDIA | Image via NVIDIA

24 Comments


It will be interesting to see how well the operating systems take advantage of the new platform - will we eventually see a 64-bit Windows Phone 8.1 some time in the future? Given how well ARM is coming along, would it be too far-fetched that we might see ARM-based laptops in the future running Windows (not tablets, but actual laptops)?

I'll be interested to see how many products end up using the Denver chip. It seems that, with laptops, the mainstream GPUs like the 630/640/650 are used the most. Some companies even call these gaming laptops. On the flip side, you rarely see laptops that incorporate the 680M (or whatever the highest one is) or even SLI.

I'm not so sure; on the BBC News page, the slideshow by NVIDIA claims it'll be much better than the graphics chips of the XBone and PS4 whilst using 20 times less power... I can't help but think that's slight BS.

n_K said,
Oh, it's PS3 and Xbox 360, my bad. Still seems a bit 'whack' though.

Why is that "whack"? Those consoles are ancient.

Enron said,
Well, you'd think a mobile graphics chip would be at least in the ballpark of 9-year-old console tech.

Yes, I'd think it's in that kinda area, but to apparently be about 4.5 times better than the PS3 CPU (bearing in mind that has 7/8 cores and this is only 2-4) and 3 times better than the PS3 GPU whilst using 20 times less power... Somehow, I just don't believe it.

n_K said,

Yes, I'd think it's in that kinda area, but to apparently be about 4.5 times better than the PS3 CPU (bearing in mind that has 7/8 cores and this is only 2-4) and 3 times better than the PS3 GPU whilst using 20 times less power... Somehow, I just don't believe it.

Neither do I.
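Taken at face value, those ratios imply an enormous jump in performance per watt. A minimal sketch of the arithmetic, using only the figures from the comment above (the commenter's numbers, not official NVIDIA or Sony specs):

```python
# Back-of-envelope check using only the ratios quoted in the comment above;
# these are the commenter's figures, not official specs.
cpu_speedup = 4.5    # claimed advantage over the PS3 CPU
gpu_speedup = 3.0    # claimed advantage over the PS3 GPU
power_ratio = 20.0   # "20 times less power"

# Performance per watt would have to improve by speedup * power_ratio.
print(f"Implied CPU perf/W gain: {cpu_speedup * power_ratio:.0f}x")  # 90x
print(f"Implied GPU perf/W gain: {gpu_speedup * power_ratio:.0f}x")  # 60x
```

A 60-90x efficiency gain over old console silicon is the scale of claim the skepticism here is reacting to.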

The TDP of a GT 630 is 50W alone, so there really is absolutely no way it can match that performance and use 10 times less power than what the desktop version puts out in heat alone.

n_K said,
The TDP of a GT 630 is 50W alone, so there really is absolutely no way it can match that performance and use 10 times less power than what the desktop version puts out in heat alone.

TDP has very little to do with power consumption.

Take the GTX 770. It is rated at a TDP of 230W, yet its power consumption in Crysis 2 peaked at 180W, about 21-22% lower.

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_770/

The GeForce GT 630 (Rev. 2) has twice the number of shaders, at 384, and much higher computational power, at 693 GFLOPS, compared to a Rev. 1 GT 630, yet its TDP is 25W compared to the latter's 50W.
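For what it's worth, here is that arithmetic spelled out, using only the numbers cited in this comment:

```python
# Figures as quoted above (TechPowerUp's GTX 770 measurement and the GT 630 specs).
gtx770_tdp_w = 230.0    # rated TDP
gtx770_peak_w = 180.0   # measured peak draw in Crysis 2

gap = (gtx770_tdp_w - gtx770_peak_w) / gtx770_tdp_w
print(f"GTX 770 peak draw sits {gap:.0%} below its rated TDP")          # ~22%

# GT 630 Rev. 2: 384 shaders, 693 GFLOPS, 25 W TDP.
print(f"GT 630 Rev. 2 efficiency: {693 / 25:.1f} GFLOPS per TDP watt")   # ~27.7
```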

There is more to this than meets the eye.

sweatshopking said,

The best part is the DX9 PS3. Never existed. It's OpenGL.

They are comparing GPU features, and the PS3's GeForce is certainly a DX9-class GPU, even though the console's OS itself doesn't support DirectX.


NeoTrunks said,
64 bit? Useless...


/s

Side Notes...

This: ARMv8 ... makes it not useless. (For people that buy into the anti-64-bit crap.)

Just like in the AMD64 world, there is more to an architectural jump than just the number of bits it can push through.

The ARMv8 architecture is far more than just added 64-bit computing, as ARM redesigned a significant portion of the architecture. The most significant change is the addition of several key CISC features to the base RISC design, which helps it in general processing.

(Complex models are slower up to a point; however, once that point is reached, their overhead and additional functionality become faster than simple models. CISC vs. RISC is a classic example of this principle, and ARM was smart enough to realize it had hit that tipping point.)

64-bit does offer more headroom for address space and larger tables/pools, but it is the internal computations, along with the new architecture, that are the real benefits of ARMv8 64-bit.
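As a minimal, hedged illustration of the address-space point (assuming a 64-bit Python build on an AArch64 or x86-64 machine), you can see the wider native pointer from user code:

```python
import ctypes
import struct

# On an ARMv8 AArch64 (or any other 64-bit) build, pointers and size_t are
# 8 bytes, so a single register can address far more than 4 GB; on ARMv7
# the same types are 4 bytes.
print("pointer size:", ctypes.sizeof(ctypes.c_void_p), "bytes")
print("native word :", struct.calcsize("P") * 8, "bits")
```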

Apple's new 64-bit CPU was a bit of marketing, but even though Apple didn't fully optimize its apps or iOS for 64-bit, the new ARMv8 architecture itself provides a significant jump in overall performance.

Microsoft was working with ARMv8 back in late 2012 and was hoping the silicon would be ready for devices by 2013, but that didn't happen. Now maybe it will, as WP8 is ready for 64-bit, and given the way apps work, they can be recompiled server-side and optimized for it as well.


@Mobius Enigma,

Did you intentionally miss the /s?

You could've simply said "It has nothing to do with 64-bit vs. 32-bit. The reason that ARMv8 performs much better than ARMv7 is the major enhancements and changes in its microarchitecture."

P.S. It's been some time and I did some looking but couldn't find anything major. Could you kindly give some links to those "flash memory and file systems" whitepapers you mentioned?

eddman said,
@Mobius Enigma,

Did you intentionally miss the /s?

You could've simply said "It has nothing to do with 64-bit vs. 32-bit. The reason that ARMv8 performs much better than ARMv7 is the major enhancements and changes in its microarchitecture."

P.S. It's been some time and I did some looking but couldn't find anything major. Could you kindly give some links to those "flash memory and file systems" whitepapers you mentioned?

No, that is why it was Side Notes and not specifically responding to the OP.

Flash memory? Which conversation?
Were you the one claiming NTFS on Flash behaves poorly, when the OS and Flash controller handle the actual location of the bits being flipped?

If so, head over to Microsoft.com or just search to see how flash memory works. There is a reason most consumer-level flash has a 100,000-write lifespan, and a specific FS makes no difference, as it doesn't lie on the media the way it does on a magnetic drive.
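To illustrate the point about the controller, here is a toy sketch (purely illustrative, not how any real controller is implemented) of a flash translation layer spreading writes across physical blocks regardless of which logical block the filesystem targets:

```python
# Toy flash translation layer (FTL) sketch: the filesystem writes to logical
# block numbers; the controller maps them onto whichever physical block is
# least worn. Grossly simplified and purely illustrative.
class ToyFTL:
    def __init__(self, physical_blocks: int):
        self.wear = [0] * physical_blocks   # program/erase count per physical block
        self.mapping = {}                   # logical block -> physical block

    def write(self, logical_block: int, data: bytes) -> int:
        # Data payload is ignored in this toy; only wear accounting is shown.
        target = min(range(len(self.wear)), key=lambda p: self.wear[p])
        self.wear[target] += 1
        self.mapping[logical_block] = target
        return target

ftl = ToyFTL(physical_blocks=8)
for _ in range(20):
    ftl.write(logical_block=0, data=b"x")   # the FS hammers the same logical block
print(ftl.wear)                             # writes end up spread across all 8 blocks
```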


If it was something else, let me know.

Would you please stop acting like that.

If you're not going to give me a simple link, then at least give me a search query.
There are thousands of articles on Microsoft.com.

Is it really that hard to share links to some information?

eddman said,

Is it really that hard to share links to some information?

Yes. I don't like to reveal where I'm copying and pasting from.

Interesting; I wonder if the Surface Mini will use something like this or a Snapdragon 800? The timing would be doable and would make up for the delay in the Mini's release. On the other hand, they could have decided to delay it just to let the OEMs go first with their 8-inch devices.