NVIDIA Tegra K1 benchmarks are out, and they're head and shoulders above the competition

The NVIDIA Tegra K1 benchmarks on AnTuTu have come out, and they're leaps and bounds ahead of everything else. Last month we reported on the Tegra K1, announced at CES 2014, and noted that the SoC was believed to operate at 2.5GHz. There is now some confirmation, however, that the 64-bit dual-core variant is running at a massive frequency of 3.0GHz. The Tegra K1's 192-core GPU is based on the Kepler architecture found in the company's latest GeForce PC graphics cards.

The dual-core variant scored a giant 43,617 points on AnTuTu, and the quad-core variant scored a slightly higher 43,851. For reference, the Snapdragon 805 scores 37,780 and the Snapdragon 801 scores 36,469, while MediaTek's eight-core MTK6592 manages an even smaller score of around 30,000 points, depending on clock speed.
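For a rough sense of the gap, here's a quick back-of-the-envelope comparison of those scores (the MediaTek figure is the approximate 30,000-point ballpark mentioned above, not an exact result):

```python
# Quick comparison of the AnTuTu scores quoted above (reported scores only;
# real-world performance will of course vary).
scores = {
    "Tegra K1 (quad-core, 32-bit)": 43851,
    "Tegra K1 (dual-core, 64-bit)": 43617,
    "Snapdragon 805": 37780,
    "Snapdragon 801": 36469,
    "MediaTek MTK6592 (approx.)": 30000,
}

baseline = scores["Tegra K1 (quad-core, 32-bit)"]
for name, score in scores.items():
    lead = (baseline / score - 1) * 100
    print(f"{name}: {score:,} points (quad-core K1 lead: {lead:+.1f}%)")
```

That works out to a lead of roughly 16-20 percent over the Snapdragon parts and over 40 percent over the MediaTek chip, at least on this one synthetic benchmark.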

There is no detail on which device the SoCs were installed in and benchmarked on, but from the AnTuTu results we know that it has a 1080p screen and 2GB of RAM, and runs Android 4.4.2.

The Tegra K1, initially demoed as "Project Logan", was promoted as bringing the graphics quality of PCs and consoles to the smartphone world.

The quad-core 32-bit Tegra K1 is expected to start showing up in devices over the next few months, whereas the "Denver" dual-core 64-bit variant will appear in products sometime in the second half of the year. The future of graphics on smartphones and other embedded devices is becoming more exciting every week, especially now that 1440p phones are starting to pop up. It would be great to see figures on power consumption and energy efficiency as well, since the make-or-break factor for higher-resolution displays is whether they put an unreasonable strain on battery life.

Source: MyDrivers | Images via MyDrivers, NVIDIA


36 Comments


francescob said,
Is there any comparison against Silvermont and Airmont Atoms?

Silvermont I could see asking for, but as for Airmont, how could there be a comparison against something that isn't currently available?

SharpGreen said,

Silvermont I could see asking for, but as for Airmont, how could there be a comparison against something that isn't currently available?

Weren't some Cherry Trail (Airmont) benchmark results shown during MWC 2014?

A couple of points, if I may...
1) Since Nvidia does not plan to have the 64-bit K1 available until towards the end of the year, I suspect that they did not issue these benchmarks themselves. Further, any creatively acquired benchmarks are undoubtedly from a prototype and, as such, would not have proper drivers/boards/apps/etc. Nor would such a prototype be the final debugged version of the chip. The benchmark numbers are essentially meaningless.
2) A 192-core GPU is not something you *really* need in a phone. The K1 line, like most Tegra chips, is most likely aimed at the tablet or clamshell markets; something where a 1920x1080 display really comes to life. Otherwise, they would have included their own proprietary LTE modem (as they do in the Tegra 4i). So, speculating on power envelopes for phone usage is not worth the time.
3) Shifting to a 64-bit core is another indication that they are moving closer to a tablet/clamshell platform that is targeted at more "mainstream" software like gaming or even (gulp!) Microsoft apps.
4) These days, the real money is to be made in China. You are unlikely to see a large Nvidia presence in cellphones in North America. Qualcomm has the North American cellphone market sewn up because of the legacy requirement for their developed-in-house CDMA technology. However, the days of CDMA are fading, and voice-over-LTE (like all new networks in China) will become the new standard.

The closest comparison on the desktop is to Maxwell (which is derived from the GPU side of the Tegra K1), and specifically the GTX 750 Ti. And for those that have been in a cave, the GTX 750, even in Ti trim, is a 60W TDP part, and despite that 128-bit memory bus, it whacks the GTX 550 Ti and trades blows with up to the GTX 660 (both of which have wider memory buses AND require more power). It's also selling like gangbusters - it's outstripping supply, and thus prices have already begun heading north.

swanlee said,
They keep upgrading Tegra and almost nothing uses the new versions. Kind of odd.

Too true! I had a Zune HD and I think that was the only device that used Tegra 1. And then there was the Atrix 4G and again I think that was the only device that used the Tegra 2 lol.

SiCr said,
Nvidia's Tegra K1 draws a shocking number of Watts

http://semiaccurate.com/2014/0...raws-shocking-number-watts/

they based that off a power supply... which means nothing... it was a demo unit, they could have just used a standard power brick... if they put a Kill A Watt on it and got a number then I'd be shocked... doing normal electrical math to get watts from voltage and potential amps is meaningless... it just means that brick can support that much
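To put that point in numbers: a brick's rating is an upper bound on what it can supply, not a measurement of what the board draws. A minimal sketch of the arithmetic (the 12V/5A rating and the 8.5W reading below are made-up figures, purely for illustration):

```python
# A power brick's rating is an upper bound, not a measurement of actual draw.
# All numbers here are hypothetical, just to show the arithmetic.
brick_volts = 12.0   # voltage printed on the adapter
brick_amps = 5.0     # maximum current the adapter can supply
brick_capacity_w = brick_volts * brick_amps   # 60 W the brick *could* deliver

measured_draw_w = 8.5  # what a wall-socket meter (e.g. a Kill A Watt) might read

print(f"Brick capacity: {brick_capacity_w:.0f} W (upper bound only)")
print(f"Measured draw:  {measured_draw_w:.1f} W (actual consumption)")
```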

Of course it does; they just release benchmarks with performance numbers without stating the intended form factor of those configurations. Classic Nvidia.

192 Kepler cores running at 950MHz in a <5W envelope for tablets? How can they do that when their mobile 384-core version (GK208, GT 735M) consumes 20W at 889MHz, and that still with a measly 64-bit interface?

Benchmark all you want Nvidia, but please show me a real product producing those numbers, not a controlled test bench with active cooling.
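For a rough sense of the gap being questioned above, here's a back-of-the-envelope peak-throughput estimate. It assumes the usual 2 FLOPS per CUDA core per clock (one FMA) for Kepler and takes the 5W and 20W figures above at face value; it says nothing about sustained, real-world performance:

```python
# Rough peak-FP32 estimate: 2 FLOPS per CUDA core per clock (one FMA), the
# conventional way Kepler peak throughput is quoted. Power figures are the
# claimed/rated numbers from the comment above, taken at face value.
def peak_gflops(cores, clock_mhz):
    return cores * 2 * clock_mhz / 1000.0

k1 = peak_gflops(192, 950)       # Tegra K1 GPU: 192 cores @ 950 MHz
gt735m = peak_gflops(384, 889)   # GT 735M (GK208): 384 cores @ 889 MHz

print(f"Tegra K1: ~{k1:.0f} GFLOPS peak, ~{k1 / 5:.0f} GFLOPS/W at a claimed 5 W")
print(f"GT 735M:  ~{gt735m:.0f} GFLOPS peak, ~{gt735m / 20:.0f} GFLOPS/W at 20 W")
# Roughly double the efficiency of NVIDIA's own 28 nm notebook Kepler part,
# which is exactly the jump the commenter is questioning.
```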

So the K1 for phones is still 6-8 months away. I don't think Qualcomm needs to be afraid of this. I'm sure they already have an upgraded SoC that is similar in performance and uses less power overall. The K1 is similar in power consumption, but it doesn't offer the same level of integration, so a smartphone using it needs additional chips and will end up drawing more power overall.

Won't they just pair these large chips with smaller ones that will power the lesser functions of the device, like making calls? Then they could kick in the power-hungry chips for benchmarks and games (at the cost of a less snappy system overall). Eventually we'll have to ask ourselves whether 1 hour of amazing graphics is worth slowing down the rest of the system.

Interesting... However, I am a bit surprised the ARMv8 isn't giving a bit more of a boost, as not all the benchmarks show an equal gain for the quad-core versus the dual-core.

(ARMv8 is an interesting read, as much of its performance gain comes less from being 64-bit than from how they changed the architecture to be more CISC-like.)

While this is all speculation, it should be Denver in the 20nm 800 series. That's just one more reason why they're taking so long until their fall release: 20nm + Denver. As the 750 Ti demonstrates, the Maxwell architecture is done and ready to go; it's those other two factors that need to be ready.

Minor correction: the 32-bit K1's CPU is just another derivative of the ARM Cortex-A15, i.e. not based on Kepler. Its GPU is, however.

SharpGreen said,
Minor correction: the 32-bit K1's CPU is just another derivative of the ARM Cortex-A15, i.e. not based on Kepler. Its GPU is, however.

Yep, the 32-bit chip is merely a quad-core Cortex-A15; the 64-bit chip uses a dual-core custom CPU. It will be interesting to see the power consumption and battery life of devices that use the 64-bit chip.

I'll wait to see real world performance. I don't trust benchmarks after seeing the number of manufacturers who cheat them.

Plus Tegra never makes it into many devices.

I wholly agree with you; although benchmarks and stats are useful for comparing raw performance, it's what works in the real world that counts.

McKay said,
Plus Tegra never makes it into many devices.
There's the issue. Hopefully this isn't too little too late from nVidia.

Auzeras said,
I wholly agree with you; although benchmarks and stats are useful for comparing raw performance, it's what works in the real world that counts.

Unfortunately, the OP is correct: while benchmarks are often good for comparing performance, companies often cheat (especially with press-shot comparisons), so we really don't know what the performance is until we get the devices in our hands and use benchmarks that they didn't use.

I use my mobile phone more than my computer. It will be interesting to see how smartphones will one day be as powerful as, or more powerful than, a PC in terms of rendering graphics and games.

macoman said,
I use my mobile phone more than my computer. It will be interesting to see how smartphones will one day be as powerful as, or more powerful than, a PC in terms of rendering graphics and games.
That will never happen. They will pass what PCs can do today, but no matter how advanced a mobile GPU becomes, you will always be able to stick 10 of them in a PC and get better performance because of the bigger power envelope versus a mobile device.

They'll never catch up - there simply isn't enough power. Mobiles may be able to do some things faster, but that's because they have the luxury of certain optimizations and streamlined (stripped down) programs.

Unless, of course, bandwidth increases to the point where games can be rendered in the cloud and streamed to the device.

They will eventually. But probably not for the reasons you think. The ratio of people with mobile ARM devices to PCs will become so great that most game development will stagnate on the PC and only proceed on mobile devices.

This will lead to stagnation in PC GPU development as well, not because of the tech, but because of lack of a market.