Ericsson and MIT begin research on next-gen 5G and 6G networks
by Paul Hill
Ericsson has announced that it is teaming up with MIT on two research projects aimed at building new network infrastructure and hardware for next-generation 5G and, eventually, 6G mobile networks. Specifically, they will work on lithionic chips to enable neuromorphic computing.
With the new lithionic chips powering neuromorphic computing, fully cognitive AI processing could be performed with reduced operational complexity and energy consumption compared to today. Not only does this mean improved network performance, but mobile operators around the world will eventually be able to cut down their energy use.
The partners will also research mobile networks that connect to trillions of sensors and other zero-energy devices and power them using a radio signal. Powering zero-energy devices with just a radio signal has been called a significant technology challenge by Ericsson, but it opens up a lot of possibilities on the smart-city front.
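To see why powering a device from a radio signal alone is such a challenge, a back-of-envelope calculation with the standard Friis free-space transmission equation helps; the numbers below (transmit power, antenna gains, frequency, distance) are illustrative assumptions, not figures from Ericsson or MIT:

```python
import math

def friis_received_power_dbm(tx_power_dbm, tx_gain_dbi, rx_gain_dbi,
                             freq_hz, distance_m):
    """Received power in free space via the Friis transmission equation."""
    wavelength = 3e8 / freq_hz
    # Free-space path loss, (4*pi*d / lambda)^2, expressed in dB
    path_loss_db = 20 * math.log10(4 * math.pi * distance_m / wavelength)
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - path_loss_db

# Hypothetical scenario: a 1 W (30 dBm) base-station signal at 2.4 GHz,
# harvested by a small sensor 100 m away.
p_dbm = friis_received_power_dbm(30, 6, 2, 2.4e9, 100)
p_watts = 10 ** (p_dbm / 10) / 1000
```

Under these assumptions the sensor receives on the order of tens of nanowatts, which makes clear why a zero-energy device must operate on an extremely tight power budget.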
Discussing their work, Magnus Frodigh, Head of Ericsson Research, said:
The pair did not give a timeline for their research, but it will be interesting to see whether they achieve their goals. Companies around the world have already started work on 6G mobile networks, which are expected to be deployed around 2030.
Researchers probe into RNA using deep learning to develop sensors for a COVID-19 diagnostic
by Ather Fawaz
A genome is a genetic blueprint that determines an organism's characteristics. Deoxyribonucleic acid (DNA), or, in the case of many viruses, ribonucleic acid (RNA), forms the building blocks of genomic sequences. Manipulating these nucleic acids directly can lead to tangible changes in the organism.
As such, developments in genetic engineering focus on our ability to manipulate genomic sequences. But this is a daunting task. For example, precisely controlling a specific class of engineered RNA molecules called "toehold switches" can lend vital insight into cellular environments and potential diseases. However, previous experiments have shown that toehold switches are not easily tractable: many fail to respond to modifications even though they were engineered, according to known RNA folding rules, to produce the desired output in response to a given input.
Considering this, two teams of researchers from the Wyss Institute at Harvard University and MIT have developed a set of machine learning algorithms that can improve this process. Specifically, they used deep learning to analyze a large volume of toehold switch sequences and accurately predict which toeholds perform their intended tasks reliably, thereby allowing researchers to identify high-quality toeholds for their experiments. Their findings have been published in Nature in two separate papers today.
With any machine learning problem, the first step is to collect domain-specific data to train the model on. The researchers collected a large dataset composed of toehold switch sequences. Alex Garruss, co-first author and a graduate student working at the Wyss stated:
Since there were two separate teams, the researchers tried their hands with two different techniques to approach the problem. The authors of the first paper decided to analyze toehold switches not as sequences of bases, but as 2D images of base-pair possibilities. This approach, called Visualizing Secondary Structure Saliency Maps, or VIS4Map, successfully identified physical elements of the toehold switches that influenced their performance, providing insight into RNA folding mechanisms that had not been discovered using traditional analysis techniques.
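The paper does not spell out VIS4Map's exact encoding, but the idea of treating a sequence's base-pair possibilities as a 2D image can be sketched with a simple pairing matrix; the pairing rules below (Watson-Crick plus the G-U wobble pair) are standard RNA chemistry, while the specific matrix format is an assumption for illustration:

```python
# Sketch: encode an RNA sequence's base-pairing possibilities as a 2D
# matrix, the kind of "image" a secondary-structure model could consume.
COMPLEMENTS = {("A", "U"), ("U", "A"), ("G", "C"),
               ("C", "G"), ("G", "U"), ("U", "G")}

def pairing_map(seq):
    n = len(seq)
    # matrix[i][j] = 1 if bases at positions i and j could pair, else 0
    return [[1 if (seq[i], seq[j]) in COMPLEMENTS else 0
             for j in range(n)] for i in range(n)]

img = pairing_map("GCAU")
```

A model fed such matrices can, in principle, learn which spatial patterns of possible pairings correlate with a switch that folds and performs correctly.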
After generating a data set of thousands of toehold switches, one team used a computer vision-based algorithm to analyze the toehold sequences as two-dimensional images, while the other team used natural language processing to interpret the sequences as "words" written in the "language" of RNA. (Image via Wyss Institute at Harvard University)
The authors of the second paper created two different deep learning architectures that approached the challenge of identifying 'susceptible' toehold switches using orthogonal techniques. The first model was based on a convolutional neural network (CNN) and a multi-layer perceptron (MLP), and treated the toehold sequences as 1D images, or lines of nucleotide bases. Using an optimization technique called Sequence-based Toehold Optimization and Redesign Model (STORM), it identified patterns of bases and potential interactions between those bases to mark the toeholds of interest.
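Treating a nucleotide sequence as a 1D image conventionally means one-hot encoding it into four channels, one per base; STORM's actual featurization may differ, so the following is a minimal sketch of the standard approach:

```python
# Sketch: one-hot encode a nucleotide sequence so a CNN can treat it as
# a 1D "image" with four channels (A, C, G, U).
BASES = "ACGU"

def one_hot(seq):
    # Returns one length-4 channel vector per sequence position.
    return [[1.0 if base == b else 0.0 for b in BASES] for base in seq]

x = one_hot("GAUC")
```

A 1D convolution sliding over such an encoding can then pick up local base motifs, which is what lets the model flag patterns and interactions between bases.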
The second architecture mapped the problem to the domain of natural language processing (NLP), treating each toehold sequence as a phrase consisting of patterns of words. The task was then to train a model to combine these words, or nucleotide bases, into a coherent phrase. This model was integrated with the CNN-based model to create Nucleic Acid Speech (NuSpeak). This optimization technique redesigned the last nine nucleotides of a given toehold switch while keeping the remaining 21 nucleotides intact. This allowed for the creation of specialized toeholds that detect the presence of specific pathogenic RNA sequences and could be used to develop new diagnostic tests.
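A common way to turn a sequence into NLP-style "words" is to split it into overlapping k-mers; whether NuSpeak tokenizes exactly this way is an assumption, but the sketch conveys the idea:

```python
# Sketch: split an RNA sequence into overlapping k-mer "words", the usual
# way biological sequences are fed to NLP-style models.
def kmer_tokens(seq, k=3):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

words = kmer_tokens("GAUCGA")
```

Each k-mer then plays the role of a token, and a language model can learn which combinations of these tokens form a "coherent phrase", i.e. a functional switch.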
By using both models sequentially, the researchers were able to predict which toehold sequences would produce high-quality sensors. (Image via Wyss Institute at Harvard University)
To test both models, the researchers used their optimized toehold switches to sense fragments of SARS-CoV-2, the virus that causes COVID-19. NuSpeak improved the sensors' performance by an average of 160%. STORM, meanwhile, created better versions of four SARS-CoV-2 viral RNA sensors, improving their performance by up to 28 times. Commenting on these impressive results, Katie Collins, co-first author of the second paper and an MIT student at the Wyss Institute, stated:
Diogo Camacho, a corresponding author of the second paper and a Senior Bioinformatics Scientist and co-lead of the Predictive BioAnalytics Initiative at the Wyss Institute stated:
Moving forward, as Camacho envisioned, the teams are looking to generalize their algorithms to map them onto other problems in synthetic biology to potentially accelerate the development of biotechnology tools.
CERN develops 3D-printed plastic scintillators for neutrino detectors
by Ather Fawaz
Image via CERN
Neutrinos are perhaps among the most elusive yet ubiquitous particles around us. Researchers at CERN have invested heavily in detecting these ghostly particles through the T2K experiment, a leading neutrino oscillation experiment in Japan.
However, scientists are looking to upgrade the experiment's detector to yield more precise results. Plastic scintillators are frequently employed in such neutrino oscillation experiments, where they reconstruct the final state of the neutrino interaction. The upgraded detector requires a two-tonne polystyrene-based plastic scintillator segmented into 1 cm^3 cubes. This fine granularity yields more precise results but makes the detector assembly considerably harder.
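A rough calculation shows the scale of that assembly problem; the polystyrene density of about 1.05 g/cm^3 used below is a standard textbook value, not a figure from CERN:

```python
# Back-of-envelope: how many 1 cm^3 cubes make up a two-tonne detector?
mass_g = 2_000_000            # two tonnes in grams
density_g_per_cm3 = 1.05      # typical polystyrene density (assumed)
cube_volume_cm3 = 1.0

n_cubes = mass_g / (density_g_per_cm3 * cube_volume_cm3)
# Roughly two million individual cubes, which is why printing them as
# a single optically-segmented block is so attractive.
```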
With this trade-off in mind, the CERN EP-Neutrino group in collaboration with the Institute for Scintillation Materials (ISMA) of the National Academy of Science of Ukraine developed a new plastic scintillator production technique that involves additive manufacturing. Jargon aside, the solution involves 3D-printing a single gargantuan block of scintillator containing many optically independent cubes.
The preliminary test runs of the 3D-printed cube have shown promising results thus far and demonstrate the proof of concept.
However, CERN noted that fully adopting these 3D-printed scintillators will require fine-tuning of the 3D-printer configuration, further optimization of the scintillator parameters, and the development of a light reflector material for optically isolating the cubes. Nevertheless, the team noted that the technique is worth exploring: 3D-printed plastic scintillators are not only robust and cost-effective, but their potential applications also extend beyond high-energy physics to fields like cancer therapy, where particle detectors are often used.
Neural networks are now being used to track exotic particles at CERN
by Ather Fawaz
Image via CERN
Research within the domain of physics has profited from the rise of artificial neural networks and deep learning. In the past, we've seen them being applied to study dark matter and massive galaxies. Continuing this pattern, we now have artificial neural networks being used in the study of exotic particles.
At the Compact Muon Solenoid (CMS), which is a particle detector built on the Large Hadron Collider (LHC) at CERN, researchers are using neural networks to identify atypical experimental signatures resulting from proton–proton collisions inside the LHC.
These experimental signatures are hard to track with traditional algorithms because most of the 'debris' generated by a collision is short-lived. Neural networks can prove potent in this situation because they can be trained on real-world data.
CMS' neural network has been trained with such data and will soon be able to detect these experimental signatures automatically. For training, the researchers used domain adaptation by backward propagation to improve the simulation modeling of the jet class probability distributions observed in collision data.
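The core trick in domain adaptation by backward propagation (Ganin & Lempitsky's method) is a gradient reversal layer: the forward pass is the identity, but the backward pass flips the gradient's sign so the feature extractor is pushed toward features that a domain classifier cannot tell apart. A framework-free sketch of just that layer, not the CMS model itself:

```python
# Sketch of a gradient reversal layer: identity forward, sign-flipped
# (and scaled) gradient backward. Real implementations hook into an
# autograd framework; this standalone class just illustrates the math.
class GradientReversal:
    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between task loss and domain confusion

    def forward(self, x):
        return x  # identity in the forward pass

    def backward(self, grad_output):
        # Reverse and scale the gradient flowing to the feature extractor
        return [-self.lam * g for g in grad_output]

layer = GradientReversal(lam=0.5)
out = layer.forward([1.0, 2.0])
grads = layer.backward([0.2, -0.4])
```

Placed between the feature extractor and a domain classifier, this layer makes the extractor maximize the domain classifier's loss, which is what aligns simulated and real collision data.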
The model has shown promising results thus far. In an analysis where the probability of correctly identifying a jet from a long-lived particle was 50%, the model misidentified only one regular jet in every thousand, demonstrating low false-positive and false-negative counts.
CERN believes that the new system will help advance the organization's quest for ephemeral and exotic particles. For more information, you may study the paper published on arXiv.
TextFooler tricks venerable NLP models like BERT into making wrong predictions
by Ather Fawaz
While natural language processing (NLP) models have picked up pace in recent years, they are not quite there yet. And now, a team of researchers at MIT has created a framework that illustrates one of their shortcomings.
The program, dubbed TextFooler, attacks NLP models by changing specific parts of a given sentence to 'fool' them into making wrong predictions. It targets two principal tasks: text classification and entailment. By perturbing the input, it aims to flip the model's classification or invalidate its entailment judgment.
Jargon aside, the program swaps the most important words in a given input with synonyms to change how the models interpret the sentence as a whole. While these synonyms read as ordinary to us, and the sentence keeps essentially the same meaning, they make the targeted models interpret it differently.
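The swap-the-important-words loop can be sketched in miniature; the classifier and synonym table below are toy stand-ins invented for illustration, not TextFooler's actual components:

```python
# Sketch of TextFooler's core move: rank words by how much removing them
# changes the model's score, then swap the most important ones for
# synonyms until the prediction changes.
SYNONYMS = {"great": ["terrific"], "movie": ["film"]}  # toy table

def toy_score(words):
    # Hypothetical sentiment classifier: counts "positive" words.
    return sum(1.0 for w in words if w in ("great", "wonderful"))

def attack(sentence):
    words = sentence.split()
    base = toy_score(words)
    # Word importance = score drop when that word is deleted
    importance = [(base - toy_score(words[:i] + words[i + 1:]), i)
                  for i in range(len(words))]
    for drop, i in sorted(importance, reverse=True):
        for syn in SYNONYMS.get(words[i], []):
            candidate = words[:i] + [syn] + words[i + 1:]
            if toy_score(candidate) < base:  # prediction changed
                return " ".join(candidate)
    return sentence

adv = attack("a great movie")
```

The real system additionally checks semantic similarity and grammaticality of each candidate, which is what keeps the adversarial sentence natural to a human reader.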
One example is as follows:
To evaluate TextFooler, the researchers used three criteria: first, whether it changed the model's prediction for classification or entailment; second, whether the altered sentence seemed equivalent in meaning to a human reader, compared with the original; and third, whether the output text looked natural enough.
The framework successfully attacked three well-known NLP models, including BERT. Interestingly, by changing only 10 percent of the input sentence, TextFooler brought models exhibiting accuracies of over 90 percent down to under 20 percent.
All in all, the team behind TextFooler commented that their research was undertaken with the hope of exposing the vulnerabilities of current NLP systems to make them more secure and robust in the future. They hope that TextFooler will help generalize the current and upcoming models to new, unseen data. The researchers plan on presenting their work at the AAAI Conference on Artificial Intelligence in New York.