"Mars, here we come!!" exclaims Elon Musk despite explosive ending to Starship's test flight
by Ather Fawaz
Image via Trevor Mahlmann (YouTube) The Starship initiative by SpaceX is meant to make spaceflight to Mars a reality. After yesterday's launch was scrubbed by an auto-abort procedure in the Starship's Raptor engines, SpaceX geared up for a re-run of the test a few hours ago. This time, Starship SN8 successfully took flight from its test site in Boca Chica, Texas. A trimmed version of the complete event from Trevor Mahlmann's YouTube channel is embedded below.
Compared to the scrubbed launch, things went better this time, but not entirely. The gargantuan 160-foot-tall rocket, propelled by three Raptor engines, took flight and was intended to rise to an altitude of 41,000 ft (12,500 m). SpaceX founder Elon Musk called the ascent a success, though it's not clear whether the rocket reached its intended altitude. Nevertheless, after reaching its highest point, the rocket began its journey back to its earthly test site.
Image via Trevor Mahlmann (YouTube) The SN8 prototype performed a spectacular mid-air flip maneuver to set itself on course to land vertically back on Earth, a feat we've all grown accustomed to seeing with SpaceX's Falcon 9 rocket. The SN8 executed the landing flip successfully, and SpaceX tweeted a closer look at the event as it happened. Impressively, SpaceX claimed that in doing so, the SN8 became the largest spacecraft ever to perform a landing maneuver of this sort.
But as the rocket prepared to touch down and its engines fired to slow its descent and cushion the landing, pressure in the fuel header tank dropped. This caused the "touchdown velocity to be high & RUD" during the landing burn, Musk tweeted. Unfortunately, this meant that upon touchdown, the Starship SN8 prototype burst into flames.
Image via SpaceX Livestream Notwithstanding the fiery end in the final few moments, SpaceX and Musk hailed the test as a success. "SN8 did great! Even reaching apogee would’ve been great, so controlling all way to putting the crater in the right spot was epic!!" Musk tweeted. "We got all the data we needed. Congrats SpaceX team hell yeah!!" he continued, before following up with another tweet exclaiming, "Mars, here we come!!"
Intel shows promising progress and key advances in integrated photonics for data centers
by Ather Fawaz
Image via Intel Press Kit The effective management, control, and scaling of electrical input/output (I/O) are crucial in data centers today. This challenge has spurred innovative ideas like Microsoft's Project Natick, which submerged a complete data center underwater, and optical computing and photonics, which aim to use light as a basic energy source in a device and as a medium for transferring information.
Building on this, at the Intel Labs Day 2020 conference today, Intel highlighted key advances in the fundamental technology building blocks that are the linchpin of the firm's integrated photonics research. These building blocks include light generation, amplification, detection, modulation, and complementary metal-oxide-semiconductor (CMOS) interface circuits, all of which are essential to achieving integrated photonics.
Among the first noteworthy updates, Intel showed off a prototype that featured tight coupling of photonics and CMOS technologies. This served as a proof-of-concept of future full integration of optical photonics with core compute silicon. Intel also highlighted micro-ring modulators that are 1000x smaller than contemporary components found in electronic devices today. This is particularly significant as the size and cost of conventional silicon modulators have been a substantial barrier to bringing optical technology onto server packages, which require the integration of hundreds of these devices.
The key developments can be summarized as follows:
These results point towards the extended use of silicon photonics beyond the upper layers of the network and onto future server packages. The firm also believes that it paves a path towards integrating photonics with low-cost, high-volume silicon, which can eventually power our data centers and networks with high-speed, low-latency links.
Image via Intel Press Kit “We are approaching an I/O power wall and an I/O bandwidth gap that will dramatically hinder performance scaling,” said James Jaussi, Senior Principal Engineer and Director of the PHY Lab at Intel Labs. He signaled that the firm's “research on tightly integrating photonics with CMOS silicon can systematically eliminate barriers across cost, power, and size constraints to bring the transformative power of optical interconnects to server packages.”
Intel Labs Day 2020: Robotics demonstrations and a next-gen neuromorphic chip on the horizon
by Ather Fawaz
Loihi, Intel’s neuromorphic research chip. Image via Intel Press Kit Neuromorphic computing, as the name implies, aims to emulate the human brain's neural structure for computation. It's a relatively recent idea and one of the more radical departures from contemporary computer architectures. Work on it has been gaining traction, and promising results have emerged; as recently as June this year, a neuromorphic device was used to recreate a gray-scale image of Captain America’s shield.
Alongside other notable announcements at Intel Labs Day 2020, the firm also gave us an update on the progress of its Intel Neuromorphic Research Community (INRC), which aims to expand the applications of neuromorphic computing in business use cases. This consortium, which originally came together in 2018 and includes some Fortune 500 companies and government members, has now grown to over 100 companies and academic groups, with new additions like Lenovo, Logitech, Mercedes-Benz, and Prophesee. Moreover, at the virtual conference, Intel highlighted some research results coming out of the INRC, computed on the company’s neuromorphic research test chip, Loihi.
Intel's Nahuku boards, each of which contains 8 to 32 Loihi neuromorphic chips. Image via Intel Press Kit Researchers showcased two state-of-the-art neuromorphic robotics demonstrations. In the first, by Intel and ETH Zurich, Loihi was seen adaptively controlling a horizon-tracking drone platform, achieving closed-loop speeds of up to 20 kHz with 200 µs of visual processing latency, a 1,000x gain in combined efficiency and speed compared to traditional solutions. In the second, the Italian Institute of Technology and Intel showed multiple cognitive functions, like object recognition, spatial awareness, and real-time decision-making, all running together on Loihi in IIT’s iCub robot platform.
Other updates highlighted in the conference include:
Moving forward, Intel will fold the lessons learned from experiments over the last couple of years into the development of the second generation of its Loihi neuromorphic chip. While the technical details of the next-gen chip are still nebulous, Intel says that it is on the horizon and "will be coming soon".
China launches Chang'e-5 mission to extract and bring lunar rock samples to Earth
by Ather Fawaz
Image via National Geographic China successfully launched its Chang'e-5 mission on Monday, sending a spacecraft to the Moon to collect rock samples. If everything goes according to plan, the lander portion of the spacecraft will touch down on the lunar surface by the end of this week and will have approximately 14 days, roughly the length of daylight at its landing site, to collect the samples and send them back to Earth.
The spacecraft took off from the Wenchang space site at Hainan Island in China on Monday. Unlike previous missions, China was open about live-streaming and consistently sharing information about the launch procedures. The entire event was live-streamed by Chinese state media without any delay, showing the growing confidence that the nation has in its space program.
The mission is being hailed as the most ambitious in China's space history. Not only will it be the first attempt at collecting lunar rock samples in over forty years, but it also sets the nation on course to become only the third country to bring pieces of the Moon back to Earth, joining the ranks of the U.S. and the Soviet Union, which completed the feat with the Apollo missions and the Luna robotic landings, respectively.
China plans to land Chang'e-5 on Mons Rümker, an isolated volcanic formation in the northwest part of the Moon's near side that is also much younger than the sites the Apollo astronauts visited. Once there, the spacecraft is slated to retrieve more than four pounds of lunar samples. For comparison, the three successful Soviet Luna missions brought back close to 0.625 pounds in total, while NASA’s Apollo astronauts ferried 842 pounds of moon rock and soil back to Earth.
From liftoff to touchdown back on Earth, the entire mission is scheduled to take less than a month. China hopes that the successful completion of Chang’e-5 will be a stepping stone towards establishing an international lunar research station, ahead of its ambitions to colonize the Moon in the next decade.
Source: The New York Times via Engadget
QCE20: Here's what you can expect from Intel's new quantum computing research this week
by Ather Fawaz
The IEEE Quantum Week (QCE20) is a conference where academics, newcomers, and enthusiasts alike come together to discuss new developments and challenges in the field of quantum computing and engineering. Due to COVID-19 restrictions, this year's conference is being held virtually, starting today and running until October 16.
Throughout the event, QCE20 will host parallel tracks of workshops, tutorials, keynotes, and networking sessions from industry front-runners like Intel, Microsoft, IBM, and Zapata. From the pack, today we’ll peek into what Intel has in store for the IEEE Quantum Week. In particular, we’ll preview Intel’s array of new papers on developing commercial-grade quantum systems.
Image via Intel
Designing high-fidelity multi-qubit gates using deep reinforcement learning
Starting off, Intel will present a paper in which researchers employed a deep learning framework to simulate and design high-fidelity multi-qubit gates for quantum dot qubit systems. This research is interesting because quantum dot silicon qubits, owing to their small size, can potentially improve the scalability of quantum computers. The paper also indicates that machine learning is a powerful technique for optimizing the design and implementation of quantum gates. A similar insight was used by another team at the University of Melbourne back in March, where researchers used machine learning to pinpoint the spatial locations of phosphorus atoms in a silicon lattice to design better quantum chips and subsequently reduce computational errors.
Efficient quantum circuits for accurate state preparation of smooth, differentiable functions
Next up, Intel's second paper proposes an algorithm that optimizes the loading of certain classes of functions, e.g. Gaussian and other smooth probability distributions, which are frequently used for mapping real-world problems to quantum computers. By loading data into a quantum computer faster and increasing throughput, the researchers believe we can save time and leverage the exponential compute power offered by quantum computers in practical applications.
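To get a feel for what "loading" a distribution means, here is a minimal classical sketch of amplitude encoding, where a discretized Gaussian becomes the amplitudes of a small quantum register. All names and numbers are illustrative; Intel's paper concerns a more efficient circuit-level construction, not this direct computation.

```python
import numpy as np

# Discretize a Gaussian over the 2^n basis states of an n-qubit register.
n_qubits = 3
x = np.linspace(-2.0, 2.0, 2 ** n_qubits)

# Probabilities for each basis state, normalized to sum to 1.
p = np.exp(-x ** 2)
p /= p.sum()

# The state's amplitudes are the square roots of the probabilities, so
# measuring the register reproduces the target distribution.
amplitudes = np.sqrt(p)
assert np.isclose(np.sum(amplitudes ** 2), 1.0)  # a valid quantum state
```

The catch, and the motivation for the paper, is that preparing such an amplitude vector with a quantum circuit generally requires a number of gates that grows with the state's size, which is what state-preparation algorithms try to reduce for smooth functions like this one.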
Image via Intel
On connectivity-dependent resource requirements for digital quantum simulation of d-level particles
One of the earliest and most useful applications of quantum computers is simulating a quantum system of particles. Consider a scenario where the ground state of a system is to be calculated to study a certain chemical process. Classically, this task involves finding the lowest eigenvalue of a matrix known as the Hamiltonian, which represents the energies of the system's states; the corresponding eigenvector is the ground state. But this deceptively simple task grows exponentially harder as the number of particles in the system increases. Naturally, researchers have devised quantum algorithms for it. Intel’s paper highlights the development and resource requirements of running such algorithms on small qubit systems. The firm believes that the insight garnered from these findings can have potential implications for designing future qubit chips while simultaneously making quantum computing more accessible.
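The classical version of the task is easy to state in code. Below is a toy two-qubit Hamiltonian (an illustrative model assembled from Pauli matrices, not one from Intel's paper) whose ground-state energy is its lowest eigenvalue; the 4x4 matrix here becomes 2^n x 2^n for n particles, which is where the exponential blow-up comes from.

```python
import numpy as np

# Single-qubit Pauli matrices.
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# A toy two-qubit Hamiltonian: H = Z(x)Z + 0.5 * (X(x)I + I(x)X),
# built with Kronecker products ((x) denotes the tensor product).
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I) + np.kron(I, X))

# eigh returns eigenvalues in ascending order for a Hermitian matrix,
# so the first entry is the ground-state energy.
eigenvalues, eigenvectors = np.linalg.eigh(H)
ground_energy = eigenvalues[0]
ground_state = eigenvectors[:, 0]
```

Doubling the particle count squares the matrix dimension, so direct diagonalization quickly becomes intractable, which is exactly the regime quantum simulation algorithms target.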
A BIKE accelerator for post-quantum cryptography
While we’re still in the NISQ (Noisy Intermediate-Scale Quantum) era of quantum computing, meaning that perfect quantum computers with thousands of qubits running Shor’s algorithm are still a thing of the future, firms have already started preparing for a ‘quantum-safe’ future. One of the foreseeable threats posed by quantum computers is the ease with which they can factor large numbers, and hence break our existing standards of encryption. In this paper, researchers at Intel aim to address this concern by presenting a hardware accelerator design for BIKE (Bit-flipping Key Encapsulation), a scheme intended to make today's cryptosystems resilient to quantum attacks. Notably, BIKE is also currently under consideration by the National Institute of Standards and Technology (NIST), so a degree of adoption and standardization might be on the cards in the future.
Engineering the cost function of a variational quantum algorithm for implementation on near-term devices
Addressing the prevalent issues of the NISQ era once again, this paper debuts a novel technique that helps quantum-classical hybrid algorithms run efficiently on small qubit systems. The technique can be handy in this era since most practical uses of quantum computers involve a hybrid setup in which a quantum computer is paired with a classical one. To illustrate, the aforementioned problem of finding the ground state of a quantum system can be solved by a Variational Quantum Eigensolver (VQE), which uses both classical and quantum routines to estimate the lowest eigenvalue of a Hamiltonian. Running such hybrid algorithms efficiently on near-term hardware is difficult, but the method for engineering cost functions outlined in this paper could allow small qubit systems to do so.
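The hybrid loop VQE runs can be sketched entirely classically: a parameterized trial state plays the role of the quantum circuit, its energy is the cost function, and a classical optimizer tunes the parameter. The single-qubit Hamiltonian and ansatz below are illustrative toys, not the construction from Intel's paper.

```python
import numpy as np

# Toy single-qubit Hamiltonian built from Pauli matrices.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = Z + 0.5 * X

def ansatz(theta):
    # Parameterized trial state: cos(theta/2)|0> + sin(theta/2)|1>,
    # standing in for the quantum circuit's prepared state.
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def cost(theta):
    # The cost function the hybrid loop minimizes: the energy <psi|H|psi>,
    # which a real device would estimate from repeated measurements.
    psi = ansatz(theta)
    return psi @ H @ psi

# The classical optimizer side of the loop (here, a simple parameter
# sweep) searches for the theta that minimizes the energy.
thetas = np.linspace(0.0, 2.0 * np.pi, 2001)
estimate = min(cost(t) for t in thetas)

# Exact ground-state energy for comparison.
exact = np.linalg.eigvalsh(H)[0]
```

On real NISQ hardware each `cost` evaluation is noisy and expensive, which is why shaping the cost function itself, as the paper proposes, matters for making the loop converge on small, imperfect devices.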
Image via Intel
Finally, on the penultimate day of the conference, Dr. Anne Matsuura, the Director of Quantum Applications and Architecture at Intel Labs, will deliver a keynote titled “Quantum Computing: A Scalable, Systems Approach”. In it, Dr. Matsuura will underscore Intel’s strategy of taking a systems-oriented, workload-driven view of quantum computing to commercialize quantum computers in the NISQ era.
The research outlined above accentuates Intel’s efforts to develop useful applications that are ready to run on near-term, smaller qubit quantum machines. It also places the tech giant among the ranks of IBM and Zapata, which are likewise working on the commercialization of quantum computers.