Intel shows promising progress and key advances in integrated photonics for data centers
by Ather Fawaz
Image via Intel Press Kit The effective management, control, and scaling of electrical input/output (I/O) are crucial in data centers today. This challenge has spurred innovative ideas like Microsoft's Project Natick, which submerged a complete data center underwater, and optical computing and photonics, which aim to use light both as a basic energy source in a device and as a medium for transferring information.
Building on this, at the Intel Labs Day 2020 conference today, Intel highlighted key advances in the fundamental technology building blocks that are the linchpin of the firm's integrated photonics research. These building blocks include light generation, amplification, detection, modulation, and complementary metal-oxide-semiconductor (CMOS) interfacing, all of which are essential to achieving integrated photonics.
Among the first noteworthy updates, Intel showed off a prototype that featured tight coupling of photonics and CMOS technologies. This served as a proof of concept for the future full integration of photonics with core compute silicon. Intel also highlighted micro-ring modulators that are 1000x smaller than contemporary components found in electronic devices today. This is particularly significant as the size and cost of conventional silicon modulators have been a substantial barrier to bringing optical technology onto server packages, which require the integration of hundreds of these devices.
The key developments can be summarized as follows:
These results point towards the extended use of silicon photonics beyond the upper layers of the network and onto future server packages. The firm also believes that it paves a path towards integrating photonics with low-cost, high-volume silicon, which can eventually power our data centers and networks with high-speed, low-latency links.
Image via Intel Press Kit “We are approaching an I/O power wall and an I/O bandwidth gap that will dramatically hinder performance scaling,” said James Jaussi, senior principal engineer and director of the PHY Lab at Intel Labs. He signaled that the firm's “research on tightly integrating photonics with CMOS silicon can systematically eliminate barriers across cost, power, and size constraints to bring the transformative power of optical interconnects to server packages.”
Intel Labs Day 2020: Robotics demonstrations and a next-gen neuromorphic chip on the horizon
by Ather Fawaz
Loihi, Intel’s neuromorphic research chip. Image via Intel Press Kit Neuromorphic computing, as the name implies, aims to emulate the human brain's neural structure for computation. It is a relatively recent idea and one of the more radical takes on contemporary computer architecture. Work on it has been gaining traction, and promising results have emerged: as recently as June this year, a neuromorphic device was used to recreate a gray-scale image of Captain America’s shield.
Alongside other notable announcements at Intel Labs Day 2020, the firm also gave us an update on the progress with its Intel Neuromorphic Research Community (INRC). The aim of the INRC is to expand the applications of neuromorphic computing in business use cases. This consortium, which originally came together in 2018 and includes some Fortune 500 and government members, has now been expanded to over 100 companies and academics with new additions like Lenovo, Logitech, Mercedes-Benz, and Prophesee. Moreover, Intel also highlighted some research results coming out of the INRC computed on the company’s neuromorphic research test chip, Loihi, at the virtual conference.
Intel Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips. Image via Intel Press Kit Researchers showcased two state-of-the-art neuromorphic robotics demonstrations. In the first demonstration, by Intel and ETH Zurich, Loihi adaptively controlled a horizon-tracking drone platform. It achieved closed-loop speeds of up to 20kHz with 200µs of visual processing latency, a 1,000x gain in combined efficiency and speed compared to traditional solutions. In the second demonstration, the Italian Institute of Technology and Intel showed multiple cognitive functions, such as object recognition, spatial awareness, and real-time decision-making, all running together on Loihi in IIT’s iCub robot platform.
Other updates highlighted in the conference include:
Moving forward, Intel will fold the lessons learned from experiments over the last couple of years into the development of the second generation of its Loihi neuromorphic chip. While the technical details of the next-gen chip are still nebulous, Intel says that it is on the horizon and "will be coming soon".
Intel debuts Horse Ridge II, a cryogenic quantum control chip with two key features
by Ather Fawaz
Image via Intel Press Kit Last year, Intel debuted its first-generation Horse Ridge cryogenic control chip to make quantum computing more commercially viable. The SoC targeted control electronics and interconnections within quantum computers to reduce the complexity of controlling and managing quantum circuits.
Today, at Intel Labs Day 2020, the tech giant unveiled the next iteration of its Horse Ridge SoC. Dubbed Horse Ridge II, the new SoC is implemented using 22nm low-power FinFET technology (22FFL), and its functionality has been verified at temperatures as low as 4 kelvins, matching the last-gen chip. Horse Ridge II, however, builds on its predecessor with two vital additions: the ability to manipulate and read qubit states, and the ability to control the potential of the several gates required to entangle multiple qubits.
The new SoC also has a programmable microcontroller that performs additional filtering on pulses to reduce crosstalk between qubits. Intel plans to detail the full technical specifications of Horse Ridge II during the International Solid-State Circuits Conference (ISSCC) in February next year.
First-generation Intel Horse Ridge. Image via Intel Press Kit Jim Clarke, director of quantum hardware within Intel's Components Research Group, believes that the new SoC will streamline quantum circuit controls, which will subsequently allow greater fidelity at decreased power output, and inch us “one step closer toward the development of a ‘traffic-free’ integrated quantum circuit.”
Intel unveils ControlFlag, a machine programming tool that detects errors in code
by Ather Fawaz
Today, at Intel Labs Day 2020, Intel revealed ControlFlag, a machine programming system that uses machine learning to detect errors in code. Trained on over 1 billion unlabeled lines of production-quality code, which itself contained various bugs, ControlFlag uses a technique dubbed ‘anomaly detection’ to learn traditional coding patterns and identify potential anomalies in code that are likely to cause a bug, irrespective of the programming language.
The system extends the tech giant’s Rapid Analysis of Developers project, which aims to help software engineers and researchers write code faster. It uses unsupervised learning to train itself to identify patterns and stylistic choices in code. Intel notes that ControlFlag understands code well enough not to mistake a difference in stylistic choice for an error just because the code is ‘written differently’. An apt analogy is a grammar-checking tool that evaluates a given sentence or set of words in English for correctness rather than style.
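Intel has not detailed ControlFlag's internals here, but the underlying idea, learning which patterns are common across a large corpus and then flagging code that deviates from them, can be illustrated with a toy sketch. The tokenizer, the use of token bigrams as "patterns", and the frequency threshold below are all illustrative assumptions for this sketch, not Intel's implementation:

```python
import re
from collections import Counter

def tokenize(code):
    # Crude illustrative tokenizer: identifiers, numbers,
    # two-character comparison operators, then single symbols.
    return re.findall(r"[A-Za-z_]\w*|\d+|==|!=|<=|>=|[^\s\w]", code)

def pattern_counts(corpus):
    # "Patterns" here are simply adjacent token pairs (bigrams)
    # counted across the whole training corpus.
    counts = Counter()
    for snippet in corpus:
        toks = tokenize(snippet)
        counts.update(zip(toks, toks[1:]))
    return counts

def flag_anomalies(snippet, counts, min_freq=2):
    # Any bigram seen fewer than min_freq times in training
    # is treated as a potential anomaly.
    toks = tokenize(snippet)
    return [bg for bg in zip(toks, toks[1:]) if counts[bg] < min_freq]

# Tiny "training corpus": conditions in it always compare with ==.
corpus = [
    "if (x == 0) { y = 1; }",
    "if (x == 1) { y = 2; }",
    "if (y == 0) { x = 3; }",
]
counts = pattern_counts(corpus)

# Assignment inside a condition deviates from the learned patterns,
# so the ('x', '=') bigram is among those flagged.
suspicious = flag_anomalies("if (x = 0) { y = 1; }", counts)
```

Note that a frequency threshold this naive also flags some benign rare pairs; a production system would mine richer code structures from vastly more training data to keep false positives down, which is presumably where the billion-line corpus comes in.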
When put to the test, ControlFlag was able to identify bugs in production-quality code. In one case, it even identified an anomaly in cURL code that had previously gone unnoticed in developers' code reviews. Furthermore, Intel has already started using the system in-house for software and firmware productization.
Justin Gottschlich, who is the principal scientist, director, and founder of Machine Programming Research at Intel Labs, believes that the system can “dramatically reduce the time and money required to evaluate and debug code.” This would be beneficial, he added, since, “According to studies, software developers spend approximately 50% of the time debugging. With ControlFlag, and systems like it, I imagine a world where programmers spend notably less time debugging and more time on what I believe human programmers do best — expressing creative, new ideas to machines.”
Possible Surface Laptop 4 and Surface Pro 8 benchmarks spotted
by Abhay Venkatesh
Microsoft is expected to launch the Surface Pro 8 and Surface Laptop 4 sometime in January sporting minor spec bumps. Images of the devices were spotted passing through certification recently, corroborating reports of a minor update. The Pro 8 and Laptop 4 are expected to be powered by Intel’s 11th-gen Tiger Lake processors, with the Laptop 4 also said to offer AMD versions, just like its predecessors.
Now, possible benchmarks of the devices have been spotted, hinting at the processor SKUs expected to make it to the Redmond firm’s offerings. The two benchmark listings refer to “OEMWY” and “OEMGR” product names, codenames that are similar to what Microsoft has used in the past for other Surface PCs. One of the listings sports a system with an Intel Core i7-1185G7 processor with Iris Xe graphics, while the other features an AMD Ryzen 7 3780U Microsoft Surface Edition processor.
It is possible that the device with the Intel part – which is a top-of-the-line Tiger Lake SKU – could be the 13.5-inch Laptop 4, the 15-inch business version, or even the Surface Pro 8. However, the more interesting of the two is the one sporting the same AMD processor as the Surface Laptop 3.
The “Renoir” codename and the much higher overall scores hint that the chip is likely not a 3000-series offering but rather a Zen 2-based processor from the Ryzen 4000-series laptop lineup. Interestingly, the numbers beat generic Ryzen 7 4700U benchmarks, possibly owing to optimizations done by the company.
The Ryzen 5000-series laptop chips are expected to be launched early next year, so it seems less likely that those chips will make it to the Surface line. However, it will be interesting to see if the firm has any surprises in store for the Surface line.
The Surface Pro 8 and the Laptop 4 are expected to be launched without much fanfare sometime in January. There are murmurs of a possible black version of the Go 2 as well, accompanied by the expansion of the availability of the Surface Duo.
Source: Geekbench 5(1)(2) via WindowsLatest