Early Black Friday deal: $20.99 off the NVIDIA SHIELD
by Steven Parker
Nvidia has let us know that it is discounting the SHIELD on Amazon by $20.99, which means you can now snap it up for $129. The tubular SHIELD streaming device launched just over a year ago at $149.99 and has generally held its launch price ever since.
As a reminder, the NVIDIA SHIELD has a Tegra X1+ processor, which is 25% faster than the Tegra X1 that powered earlier models, and supports Dolby Vision and Dolby Atmos. It's also capable of upscaling 720p and 1080p content to 4K.
It comes with 2GB of RAM, 8GB of internal storage, and a microSD card slot. The SHIELD supports dual-band Wi-Fi and Bluetooth; on the software side, it ships with Android Pie out of the box and has Chromecast 4K built in.
There's also a remote included. The triangular controller has backlit buttons that light up when motion is detected, a built-in mic for Google Assistant queries, and a remote locator in case you misplace it. It connects to the SHIELD TV via Bluetooth, but it also has an IR blaster for use with your TV.
Unfortunately, there is no word on a discount for the SHIELD TV Pro, which still costs $199.99. If you are hoping to bag that model at a discount, you may have to wait until Black Friday passes; this deal stays live until Monday, November 30 at 11:59 PM PT.
Get the NVIDIA SHIELD for $129 (MSRP $149.99), 14% off.
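For what it's worth, the deal math checks out; here's a quick sanity check in Python using the figures from the article:

```python
# Deal math from the article: $20.99 off a $149.99 MSRP.
msrp = 149.99
discount = 20.99

sale_price = msrp - discount
pct_off = round(discount / msrp * 100)

print(f"${sale_price:.2f} ({pct_off}% off)")  # $129.00 (14% off)
```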
Check out our entire Black Friday and Cyber Monday coverage here.
As an Amazon Associate, Neowin may earn commission from qualifying purchases.
Nvidia's GeForce Now is now available on iOS, Fortnite coming soon
by João Carrasqueira
Nvidia has announced that its game streaming service, GeForce Now, is now available in beta on iOS through the Safari web browser. iOS has typically been a challenge for cloud gaming services, with Google Stadia and Microsoft's Xbox Game Pass Ultimate cloud streaming only being available on Android. However, Amazon kickstarted the idea of launching on iOS through a web browser with its Luna service, and now Nvidia is joining the fray.
If you have an iPhone or iPad running iOS or iPadOS 14.2 or later, you can access the newly-launched beta version of GeForce Now through this link. Nvidia also recommends using a gamepad for playing, as mouse and keyboard input isn't fully supported due to limitations in Safari. You can find a list of officially supported gamepads here; notably, it includes Xbox Wireless Controller models with Bluetooth.
The launch of GeForce Now on iOS means that Fortnite will officially be available on Apple devices, but it isn't yet. While Fortnite is a cross-platform game and is available on GeForce Now, the PC and mobile versions are actually quite different, and Nvidia is working with Epic Games to release a version of Fortnite that feels familiar to users on iOS. Touch controls have to be added, so the game will take a while longer to arrive.
In addition to the iOS launch, Nvidia also announced that it's adding support for the Chrome browser on Linux, PC, and Android early next year. Chrome has already been supported on ChromeOS since a beta launched earlier this summer.
Nvidia also announced that it's working with GOG.com to add its library of games to the GeForce Now catalog, starting with titles like The Witcher 3: Wild Hunt and Cyberpunk 2077. Finally, GeForce Now is expanding to Saudi Arabia thanks to a partnership with Zain KSA. These expansions are also coming soon.
FPGA-based inference accelerator outscores GPUs and ASICs in MLPerf benchmark
by Ather Fawaz
Image via Pexels
Silicon Valley-based startup Mipsology announced today that its Zebra AI inference accelerator achieved the highest efficiency in the MLPerf inference test. The benchmark, which measures training and inference performance of ML hardware, software, and services, pitted Mipsology's FPGA-based Zebra accelerator against venerable data center GPUs like the Nvidia A100, V100, and T4. Comparisons were also drawn with AWS Inferentia, Groq, Google TPUv3, and others.
The results above show Zebra running on the Xilinx Alveo U250, U200, U50LV, and ZU7EV accelerator cards. The computational efficiency in FPS/TOPS on the ResNet-50 architecture was significantly higher with Zebra, with the Xilinx Alveo U250 achieving more than 2x higher peak performance efficiency than all other commercial accelerators. Ludovic Larzul, CEO and founder of Mipsology, added that these results are from MLPerf's 'closed' category, a rigorous test that requires the same model and optimizer as the reference implementation for drawing comparisons.
The results show the computational advantages of specialized FPGA-based technologies, which are gradually making their way into the market. They also add to the critique of TOPS (Tera Operations per Second) being a direct indicator of computational performance. With a peak TOPS of 38.3, the Zebra-powered Alveo U250 accelerator card significantly outperformed competitors in terms of throughput per TOPS, exhibiting performance close to a Tesla T4 on the MLPerf v0.7 inference results, despite having 3.5x fewer TOPS.
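The throughput-per-TOPS comparison can be sketched numerically. Note the assumptions: the 38.3 TOPS figure for the Zebra-powered U250 comes from the article, the 130 INT8 TOPS for the Tesla T4 is Nvidia's published peak, and the FPS value is a hypothetical placeholder rather than an actual MLPerf result:

```python
# Efficiency as the article defines it: throughput (FPS) divided by peak TOPS.
# 38.3 TOPS (Zebra on Alveo U250) is quoted in the article; 130 INT8 TOPS for
# the Tesla T4 is Nvidia's published peak. The FPS value below is a
# hypothetical placeholder, not an MLPerf result.
def efficiency(fps: float, peak_tops: float) -> float:
    return fps / peak_tops

u250_tops = 38.3
t4_tops = 130.0

# If both accelerators delivered the same ResNet-50 throughput...
fps = 5000.0
print(f"U250: {efficiency(fps, u250_tops):.1f} FPS/TOPS")
print(f"T4:   {efficiency(fps, t4_tops):.1f} FPS/TOPS")
print(f"TOPS ratio: {t4_tops / u250_tops:.1f}x")  # ~3.4x, in line with the article's ~3.5x
```

At equal throughput, the chip with fewer peak TOPS scores proportionally higher on this metric, which is exactly the point Larzul is making about raw TOPS being a poor proxy for delivered performance.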
"Perhaps the industry needs to stop over-relying on only increasing peak TOPS. What is the point of huge, expensive silicon with 400+ TOPS if nobody can use the majority of it?" questioned Larzul, pointing to the diminishing returns of pumping in more TOPS on the foundations of contemporary circuitry.
Nvidia 457.30 WHQL driver brings AC Valhalla and Black Ops Cold War support
by Pulasthi Ariyasinghe
As more high-profile games near their release dates, Nvidia has put forth a new graphics driver aimed at providing support for them. The WHQL-certified 457.30 Game Ready driver has support for Call of Duty: Black Ops Cold War (November 13), Assassin's Creed Valhalla (November 10), and Godfall (November 12).
In addition to those, Nvidia has also implemented Reflex support for Destiny 2, promising up to 49% lower input latency in the Bungie title. The SLI profile for Black Desert has also been updated, while four monitors have received G-Sync compatibility validation: the Acer CP3271U V, ASUS XG27AQ, MSI MAG274QRF, and Xiaomi Mi 245 HF.
Several issues have been fixed in this release, and a number of known issues remain; the full lists can be found in the release notes.
The 457.30 WHQL Game Ready driver is available for download through the GeForce Experience app or the links listed below. The release notes for the driver can be seen here.
Download: Windows 7, 8, 8.1 | Windows 10 – Standard / DCH
Amazon unveils the next generation of EC2 instances rocking the latest Nvidia A100 GPUs
by Ather Fawaz
Image via Nvidia
Today, Amazon unveiled its next generation of Amazon Elastic Compute Cloud (Amazon EC2) GPU-powered instances. Dubbed P4d, each EC2 instance houses eight of Nvidia's latest A100 Tensor Core GPUs based on the Ampere architecture, delivering 2.5 petaflops of mixed-precision performance and 320GB of high-bandwidth GPU memory in a single machine. To complement this, the new P4d instances also feature 96 Intel Xeon Scalable (Cascade Lake) vCPUs, 1.1TB of system memory, and 8TB of local, fast NVMe storage.
In addition to the raw compute power provided by the CPUs and GPUs, the total network bandwidth of 400 Gbps is 16x that of the previous-generation P3 instances, which housed Nvidia V100 GPUs. Taken together, Amazon claims that the new P4d EC2 instances more than double the performance of the last generation.
By combining non-blocking petabit-scale networking infrastructure integrated with Amazon FSx for Lustre high-performance storage, AWS's Elastic Fabric Adapter (EFA), and NVIDIA GPUDirect RDMA (remote direct memory access), Amazon is also providing the ability to group P4d instances into EC2 UltraClusters. Tailored to use cases that require maximum compute power, these EC2 UltraClusters can scale to over 4,000 A100 GPUs, twice that of any other cloud provider's offering.
Jargon aside, Amazon's new P4d instances will allow you to train the larger and more complex machine learning models that are becoming increasingly common in deep learning. Inference will be sped up significantly as well. Both perks should lower initial and running costs for your use cases.
As far as pricing is concerned, AWS is only offering one configuration for the P4d instances, for now at least. The p4d.24xlarge, with eight Nvidia A100 GPUs, 96 vCPUs, 400 Gbps of network bandwidth, 8TB worth of NVMe SSDs, 19 Gbps of EBS bandwidth, and 600 GB/s of NVSwitch interconnect bandwidth, will set you back $32.77 per hour. Reserving an instance for one year or three years brings that hourly cost down to $19.22 and $11.57, respectively. Further details can be found here.
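To put those hourly rates in perspective, here's a back-of-the-envelope sketch of monthly costs, assuming 24/7 usage and an average 730-hour month (the usage pattern is an assumption; the rates are from the article):

```python
# Rough monthly cost for p4d.24xlarge at the rates quoted above,
# assuming round-the-clock (24/7) usage and an average 730-hour month.
HOURS_PER_MONTH = 730

rates = {
    "on-demand": 32.77,
    "1-year reserved": 19.22,
    "3-year reserved": 11.57,
}

for plan, hourly in rates.items():
    monthly = hourly * HOURS_PER_MONTH
    savings = 1 - hourly / rates["on-demand"]
    print(f"{plan:>16}: ${monthly:>9,.2f}/month ({savings:.0%} off on-demand)")
```

Run continuously, the three-year reservation works out to roughly 65% off the on-demand rate, which is why sustained training workloads are usually reserved rather than run on demand.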