Intel shows promising progress and key advances in integrated photonics for data centers
by Ather Fawaz
Image via Intel Press Kit
The effective management, control, and scaling of electrical input/output (I/O) are crucial in data centers today. Innovative ideas have emerged to tackle the challenge, from Microsoft's Project Natick, which submerged a complete data center underwater, to optical computing and photonics, which aim to use light as the basic medium for processing information within a device and for transferring it between devices.
Building on this, at the Intel Labs Day 2020 conference today, Intel highlighted key advances in the fundamental technology building blocks that are the linchpin of the firm's integrated photonics research. These building blocks include light generation, amplification, detection, and modulation, as well as complementary metal-oxide-semiconductor (CMOS) interface circuitry, all of which are essential to achieving integrated photonics.
Among the first noteworthy updates, Intel showed off a prototype featuring tight coupling of photonics and CMOS technologies, serving as a proof of concept for the future full integration of optical photonics with core compute silicon. Intel also highlighted micro-ring modulators that are 1,000x smaller than conventional components found in electronic devices today. This is particularly significant because the size and cost of conventional silicon modulators have been a substantial barrier to bringing optical technology onto server packages, which would require the integration of hundreds of such devices.
The key developments can be summarized as follows:
These results point towards the extended use of silicon photonics beyond the upper layers of the network and onto future server packages. The firm also believes this work paves a path towards integrating photonics with low-cost, high-volume silicon, which could eventually power our data centers and networks with high-speed, low-latency links.
Image via Intel Press Kit
“We are approaching an I/O power wall and an I/O bandwidth gap that will dramatically hinder performance scaling,” said James Jaussi, Senior Principal Engineer and Director of the PHY Lab at Intel Labs. He signaled that the firm's “research on tightly integrating photonics with CMOS silicon can systematically eliminate barriers across cost, power, and size constraints to bring the transformative power of optical interconnects to server packages.”
Intel Labs Day 2020: Robotics demonstrations and a next-gen neuromorphic chip on the horizon
by Ather Fawaz
Loihi, Intel’s neuromorphic research chip. Image via Intel Press Kit
Neuromorphic computing, as the name implies, aims to emulate the human brain's neural structure for computation. It's a relatively recent idea and one of the more radical departures from contemporary computer architectures. Work on it has been gaining traction, and promising results have emerged; as recently as June this year, a neuromorphic device was used to recreate a gray-scale image of Captain America’s shield.
Alongside other notable announcements at Intel Labs Day 2020, the firm also gave us an update on the progress of its Intel Neuromorphic Research Community (INRC), which aims to expand the applications of neuromorphic computing to business use cases. The consortium, which originally came together in 2018 and includes Fortune 500 companies and government members, has now grown to over 100 corporate and academic members, with new additions like Lenovo, Logitech, Mercedes-Benz, and Prophesee. Moreover, Intel highlighted some research results coming out of the INRC, computed on the company’s neuromorphic research test chip, Loihi, at the virtual conference.
Intel Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips. Image via Intel Press Kit
Researchers showcased two state-of-the-art neuromorphic robotics demonstrations. In the first, by Intel and ETH Zurich, Loihi adaptively controlled a horizon-tracking drone platform, achieving closed-loop speeds of up to 20kHz with 200µs of visual processing latency, a 1,000x gain in combined efficiency and speed over traditional solutions. In the second, the Italian Institute of Technology and Intel showed multiple cognitive functions, such as object recognition, spatial awareness, and real-time decision-making, running together on Loihi in IIT’s iCub robot platform.
Other updates highlighted in the conference include:
Moving forward, Intel will integrate the lessons learned from these experiments over the last couple of years into the development of the second generation of its Loihi neuromorphic chip. While the technical details of the next-gen chip are still nebulous, Intel says that it is on the horizon and "will be coming soon".
China launches Chang'e-5 mission to extract and bring lunar rock samples to Earth
by Ather Fawaz
Image via National Geographic
China successfully launched its Chang'e-5 mission on Monday, sending a spacecraft to the Moon to collect rock samples. If everything goes according to plan, the lander portion of the spacecraft will touch down on the lunar surface by the end of this week and will have approximately 14 Earth days, roughly the length of a single lunar daytime, to collect the samples that will then be brought back to Earth.
The spacecraft took off from the Wenchang space site on Hainan Island in China on Monday. Unlike with previous missions, China was open about the launch, live-streaming it and consistently sharing information about the launch procedures. The entire event was broadcast live by Chinese state media without any delay, showing the growing confidence that the nation has in its space program.
The mission is being hailed as the most ambitious in China's space history. Not only is it the first attempt at collecting lunar rock samples in over forty years, it also sets the nation on course to become only the third country to bring pieces of the Moon back to Earth, after the United States with the Apollo missions and the Soviet Union with its robotic Luna landings.
China plans to land Chang'e-5 on Mons Rümker, an isolated volcanic formation in the northwest part of the Moon's near side that is much younger than the regions the Apollo astronauts visited. Once there, the spacecraft is slated to retrieve more than four pounds of lunar samples. For comparison, the three successful Soviet Luna missions brought back roughly 0.625 pounds in total, while NASA’s Apollo astronauts ferried 842 pounds of Moon rock and soil back to Earth.
From liftoff to touchdown back on Earth, the entire mission is scheduled to take less than a month. China hopes that the successful completion of Chang’e-5 will be a stepping stone towards establishing an international lunar research station and, eventually, a sustained presence on the Moon in the next decade.
Source: The New York Times via Engadget
QCE20: Here's what you can expect from Intel's new quantum computing research this week
by Ather Fawaz
The IEEE Quantum Week (QCE20) is a conference where academics, newcomers, and enthusiasts alike come together to discuss new developments and challenges in the field of quantum computing and engineering. Due to COVID-19 restrictions, this year's conference will be held virtually, starting today and running till October 16.
Throughout the course of the event, QCE20 will host parallel tracks of workshops, tutorials, keynotes, and networking sessions by industry front-runners like Intel, Microsoft, IBM, and Zapata. From the pack, today we’ll peek into what Intel has in store for the IEEE Quantum Week. Particularly, we’ll be previewing Intel’s array of new papers on developing commercial-grade quantum systems.
Image via Intel
Designing high-fidelity multi-qubit gates using deep reinforcement learning
Starting off, Intel will present a paper in which researchers employed a deep reinforcement learning framework to simulate and design high-fidelity multi-qubit gates for quantum dot qubit systems. This research is interesting because silicon quantum dot qubits could improve the scalability of quantum computers thanks to their small size. The paper also indicates that machine learning is a powerful technique for optimizing the design and implementation of quantum gates. A similar insight was applied by another team at the University of Melbourne back in March, where researchers used machine learning to pinpoint the spatial locations of phosphorus atoms in a silicon lattice in order to design better quantum chips and subsequently reduce errors in computation.
Efficient quantum circuits for accurate state preparation of smooth, differentiable functions
Next up, Intel's second paper proposes an algorithm that optimizes the loading of certain classes of functions, such as Gaussian and other smooth probability distributions, which are frequently used to map real-world problems onto quantum computers. By loading data into a quantum computer faster and increasing throughput, the researchers believe practical applications can save time and better leverage the exponential compute power that quantum computers offer.
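To give a rough sense of what “loading” a function means here, below is a minimal sketch of amplitude encoding: a Gaussian is discretized over 2³ grid points and normalized into a valid state vector for three qubits. This only illustrates the target state; the efficient circuit construction is the subject of Intel's paper and is not reproduced here.

```python
import numpy as np

# Minimal illustration of amplitude encoding (not Intel's algorithm):
# discretize a Gaussian over 2**n grid points and normalize it so the
# values can serve as the amplitudes of an n-qubit state.
n_qubits = 3
grid = np.linspace(-3, 3, 2**n_qubits)          # sample points
amplitudes = np.exp(-grid**2 / 2)               # unnormalized Gaussian
state = amplitudes / np.linalg.norm(amplitudes) # valid state: sum of |a_i|^2 = 1

print(state)
print("norm check:", np.sum(np.abs(state)**2))  # ~1.0
```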
Image via Intel
On connectivity-dependent resource requirements for digital quantum simulation of d-level particles
One of the earliest and most promising applications of quantum computers is simulating quantum systems of particles. Consider a scenario where the ground state of a system must be calculated to study a certain chemical process. Classically, this task amounts to finding the lowest eigenvalue of a matrix known as the Hamiltonian, which encodes the energies of the system's possible states. But this deceptively simple task grows exponentially harder as the number of particles in the system increases. Naturally, researchers have devised quantum algorithms for it. Intel’s paper examines the resource requirements of running such algorithms on small qubit systems. The firm believes the insight garnered from these findings could inform the design of future qubit chips while making quantum computing more accessible.
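As a toy illustration of why the classical route breaks down, the sketch below builds a small two-spin Hamiltonian and reads off its ground-state energy as the smallest eigenvalue by direct diagonalization; the Hamiltonian and its coefficients are made up for the example, and the matrix dimension doubles with every additional spin, which is the exponential blow-up quantum simulation aims to sidestep.

```python
import numpy as np

# Single-qubit Pauli operators
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def kron_all(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Toy two-spin Hamiltonian (coefficients are arbitrary for illustration):
# H = Z(x)Z + 0.5 * (X(x)I + I(x)X)
H = kron_all(Z, Z) + 0.5 * (kron_all(X, I) + kron_all(I, X))

# Ground-state energy = smallest eigenvalue of the Hermitian matrix H.
print("ground-state energy:", np.linalg.eigvalsh(H)[0])

# H is a 2^n x 2^n matrix, so brute-force diagonalization scales
# exponentially with the number of particles.
```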
A BIKE accelerator for post-quantum cryptography
While we’re still in the NISQ (Noisy Intermediate-Scale Quantum) era of quantum computers, meaning that fault-tolerant machines with thousands of qubits running Shor’s algorithm are still a thing of the future, firms have already started preparing for a ‘quantum-safe’ future. One of the foreseeable threats posed by quantum computers is the ease with which they could factor large numbers and thereby break our existing standards of encryption. In this paper, researchers at Intel address this concern by presenting a design for a hardware accelerator for BIKE (Bit Flipping Key Encapsulation), a post-quantum scheme intended to make cryptosystems resilient to quantum attacks. Notably, BIKE is also currently under consideration by the National Institute of Standards and Technology (NIST), so a degree of adoption and standardization might be on the cards in the future.
Engineering the cost function of a variational quantum algorithm for implementation on near-term devices
Addressing the prevalent issues of the NISQ era once again, this paper debuts a novel technique that helps hybrid quantum-classical algorithms run efficiently on small qubit systems. The technique is particularly handy in this era because most practical uses of quantum computers involve a hybrid setup in which a quantum computer is paired with a classical one. To illustrate, the aforementioned problem of finding the ground state of a quantum system can be tackled by a Variational Quantum Eigensolver (VQE), which combines classical and quantum routines to estimate the lowest eigenvalue of a Hamiltonian. Running such hybrid algorithms on today's hardware is difficult, but the method for engineering cost functions outlined in this paper could allow small qubit systems to run them efficiently.
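For readers unfamiliar with the hybrid loop, here is a minimal sketch: a classical optimizer tunes the single parameter of a simulated one-qubit ansatz to minimize an energy cost function. The Hamiltonian, ansatz, and optimizer are arbitrary choices for illustration, and the expectation value below stands in for what a real quantum processor would estimate; Intel's cost-function engineering is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Toy VQE: a classical optimizer adjusts a circuit parameter while a
# (here, simulated) quantum device evaluates the energy cost function.
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
H = 0.5 * Z + 0.3 * X                         # arbitrary one-qubit Hamiltonian

def ansatz(theta):
    """|psi(theta)> = RY(theta)|0>, simulated as a plain state vector."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def cost(params):
    """Energy expectation <psi|H|psi> -- the quantity a QPU would estimate."""
    psi = ansatz(params[0])
    return float(psi @ H @ psi)

result = minimize(cost, x0=[0.0], method="COBYLA")   # classical outer loop
print("VQE estimate:  ", result.fun)
print("exact minimum: ", np.linalg.eigvalsh(H)[0])
```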
Image via Intel
Finally, on the penultimate day of the conference, Dr. Anne Matsuura, Director of Quantum Applications and Architecture at Intel Labs, will deliver a keynote titled “Quantum Computing: A Scalable, Systems Approach”. In it, Dr. Matsuura will underscore Intel’s strategy of taking a systems-oriented, workload-driven view of quantum computing to commercialize quantum computers in the NISQ era:
The research outlined above accentuates Intel’s efforts to develop useful applications that are ready to run on near-term, smaller qubit quantum machines. It also places the tech giant alongside the likes of IBM and Zapata, which are likewise working on the commercialization of quantum computers.
Researchers probe into RNA using deep learning to develop sensors for a COVID-19 diagnostic
by Ather Fawaz
A genome is the genetic blueprint that determines an organism's characteristics. Deoxyribonucleic acid (DNA) and, often in the case of viruses, ribonucleic acid (RNA) are the building blocks of genomic sequences, and manipulating these nucleic acids directly can lead to tangible changes in an organism.
As such, developments in genetic engineering hinge on our ability to manipulate genomic sequences, but this is a daunting task. For example, precisely controlling a specific class of engineered RNA molecules called "toehold switches" can lend vital insight into cellular environments and potential diseases. However, previous experiments have shown that toehold switches are not always tractable: many fail to respond to modifications even though they were engineered, based on known RNA folding rules, to produce the desired output in response to a given input.
Considering this, two teams of researchers from the Wyss Institute at Harvard University and MIT have developed a set of machine learning algorithms to improve this process. Specifically, they used deep learning to analyze a large volume of toehold switch sequences and accurately predict which toeholds perform their intended tasks reliably, thereby allowing researchers to identify high-quality toeholds for their experiments. Their findings were published in Nature in two separate papers today.
As with any machine learning problem, the first step is to collect domain-specific data to train the model on. The researchers assembled a large dataset of toehold switch sequences. Alex Garruss, co-first author and a graduate student working at the Wyss, stated:
Since there were two separate teams, the researchers tried two different techniques to approach the problem. The authors of the first paper decided to analyze toehold switches not as sequences of bases, but as 2D images of base-pair possibilities. This approach, called Visualizing Secondary Structure Saliency Maps, or VIS4Map, successfully identified physical elements of the toehold switches that influenced their performance, providing insight into RNA folding mechanisms that had not been uncovered using traditional analysis techniques.
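To give a rough idea of what a “2D image of base-pair possibilities” can look like, the hypothetical helper below marks which positions of an RNA sequence could pair under Watson-Crick rules (plus the G-U wobble pair), producing a square matrix that an image-based model can consume. The actual VIS4Map features may be computed differently.

```python
import numpy as np

# Hypothetical illustration: entry (i, j) is 1 if bases i and j of the
# sequence could pair (Watson-Crick or G-U wobble), yielding a 2D "image"
# of base-pair possibilities for an image-style model.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def pairing_map(seq):
    n = len(seq)
    image = np.zeros((n, n), dtype=np.uint8)
    for i in range(n):
        for j in range(n):
            if (seq[i], seq[j]) in PAIRS:
                image[i, j] = 1
    return image

print(pairing_map("GGGAAACUUCCC"))
```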
After generating a data set of thousands of toehold switches, one team used a computer vision-based algorithm to analyze the toehold sequences as two-dimensional images, while the other team used natural language processing to interpret the sequences as "words" written in the "language" of RNA. Image via Wyss Institute at Harvard University
The authors of the second paper created two different deep learning architectures that approached the challenge of identifying 'susceptible' toehold switches using orthogonal techniques. The first model was based on a convolutional neural network (CNN) and a multi-layer perceptron (MLP), and treated the toehold sequences as 1D images, or lines of nucleotide bases. Using an optimization technique called the Sequence-based Toehold Optimization and Redesign Model (STORM), it identified patterns of bases, and potential interactions between those bases, to mark the toeholds of interest.
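As a hedged sketch of that first architecture, the PyTorch snippet below one-hot encodes a toehold sequence into a four-channel 1D "image" and passes it through a small convolutional network with an MLP head that outputs a performance score. The layer sizes, sequence, and sequence length are illustrative assumptions, not the configuration from the paper.

```python
import torch
import torch.nn as nn

BASES = "ACGU"

def one_hot(seq):
    """Encode an RNA sequence as a (4, length) tensor: a 1D 'image' with 4 channels."""
    x = torch.zeros(4, len(seq))
    for pos, base in enumerate(seq):
        x[BASES.index(base), pos] = 1.0
    return x

class ToeholdScorer(nn.Module):
    """Illustrative CNN + MLP regressor for toehold performance (sizes are made up)."""
    def __init__(self, seq_len):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.mlp = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * seq_len, 64), nn.ReLU(),
            nn.Linear(64, 1),            # predicted switch performance
        )

    def forward(self, x):
        return self.mlp(self.conv(x))

seq = "AUGGCUACGUACGGAUCCGAUCGAUACGGC"    # toy 30-nt sequence
model = ToeholdScorer(seq_len=len(seq))
score = model(one_hot(seq).unsqueeze(0))  # add a batch dimension
print(score.shape)                        # torch.Size([1, 1])
```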
The second architecture mapped the problem to the domain of natural language processing (NLP), treating each toehold sequence as a phrase consisting of patterns of words. The task was then to train a model to combine these words, or nucleotide bases, into a coherent phrase. This model was integrated with the CNN-based model to create Nucleic Acid Speech (NuSpeak), an optimization technique that redesigns the last nine nucleotides of a given toehold switch while keeping the remaining 21 nucleotides intact. This allows for the creation of specialized toeholds that detect the presence of specific pathogenic RNA sequences and could be used to develop new diagnostic tests.
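As a loose illustration of this language framing, the hypothetical snippet below treats each nucleotide base as a "word", builds a vocabulary, and encodes a short sequence as the integer token IDs that an embedding-based NLP model would consume; the paper's actual preprocessing and model may differ.

```python
# Hypothetical illustration of the NLP framing: each base is a "word" in the
# "language" of RNA, and a sequence becomes a list of token IDs that an
# embedding-based language model could take as input.
VOCAB = {"A": 0, "C": 1, "G": 2, "U": 3}

def tokenize(seq):
    return [VOCAB[base] for base in seq]

sensor_tail = "AUGGCUACG"        # toy 9-nt region, mirroring the redesigned tail
print(tokenize(sensor_tail))     # [0, 3, 2, 2, 1, 3, 0, 1, 2]
```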
By using both models sequentially, the researchers were able to predict which toehold sequences would produce high-quality sensors. Image via Wyss Institute at Harvard University
To test both models, the researchers used their optimized toehold switches to sense fragments of SARS-CoV-2, the virus that causes COVID-19. NuSpeak improved the sensors' performance by an average of 160%, while STORM created better versions of four SARS-CoV-2 viral RNA sensors, improving their performance by up to 28 times. Commenting on these impressive results, co-first author of the second paper Katie Collins, an MIT student at the Wyss Institute, stated:
Diogo Camacho, a corresponding author of the second paper and a Senior Bioinformatics Scientist and co-lead of the Predictive BioAnalytics Initiative at the Wyss Institute, stated:
Moving forward, as Camacho envisioned, the teams are looking to generalize their algorithms and apply them to other problems in synthetic biology, potentially accelerating the development of biotechnology tools.