In the paper titled "Machine Learning for Precipitation Nowcasting from Radar Images", researchers at Google AI employed a convolutional neural network (CNN) to make short-term precipitation predictions. The results seem promising and, according to Google, outperform traditional methods:
This precipitation nowcasting, which focuses on 0-6 hour forecasts, can generate forecasts that have a 1km resolution with a total latency of just 5-10 minutes, including data collection delays, outperforming traditional models, even at these early stages of development.
Unlike traditional methods, which incorporate a priori knowledge of how the atmosphere works, the researchers used what they call a 'physics-free' approach that treats weather prediction purely as an image-to-image translation problem. As such, the CNN the team trained, a U-Net, approximates atmospheric physics only from the training examples provided to it.
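To make the image-to-image framing concrete, here is a minimal toy sketch, not the paper's U-Net: a stack of past radar frames goes in, and a single future precipitation image of the same spatial size comes out. The shapes, the time-averaging step, and the smoothing kernel are all illustrative assumptions.

```python
import numpy as np

# Hypothetical shapes; the paper's actual inputs/outputs differ in detail.
H, W, T_IN = 64, 64, 4                     # spatial grid and input time steps
past_frames = np.random.rand(H, W, T_IN)   # stand-in for observed radar frames

def conv2d_same(img, kernel):
    """Naive 'same'-padded 2D convolution (illustration only, not a U-Net)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# One toy "layer": collapse time into a single image, then smooth spatially.
# A real U-Net stacks many learned convolutions with down- and upsampling.
time_mix = past_frames.mean(axis=2)
kernel = np.full((3, 3), 1.0 / 9.0)        # simple averaging kernel
prediction = conv2d_same(time_mix, kernel) # predicted precipitation image

print(prediction.shape)  # (64, 64): same spatial grid in and out
```

The point of the framing is that input and output live on the same pixel grid, which is exactly what U-Net-style architectures are built for.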
The U-Net was trained on radar images. Data collected over the continental US from 2017 to 2019 was used for the initial training. Specifically, the data was split into four-week chunks, with the last week of each chunk used as the evaluation dataset and the remaining weeks used for training.
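The splitting scheme above can be sketched as follows. This is a hedged reconstruction: the date range and exact chunk boundaries are assumptions for illustration, not values taken from the paper.

```python
from datetime import date, timedelta

# Partition the date range into four-week chunks; hold out the last week
# of each chunk for evaluation, train on the rest.
start, end = date(2017, 1, 1), date(2019, 12, 31)  # assumed range

train_days, eval_days = [], []
day = start
while day <= end:
    chunk_end = min(day + timedelta(weeks=4) - timedelta(days=1), end)
    eval_start = chunk_end - timedelta(days=6)  # last 7 days of the chunk
    d = day
    while d <= chunk_end:
        (eval_days if d >= eval_start else train_days).append(d)
        d += timedelta(days=1)
    day = chunk_end + timedelta(days=1)

print(len(train_days), len(eval_days))  # 819 276
```

Holding out the final week of every chunk (rather than a single contiguous block) spreads the evaluation data across all seasons, which matters for weather data with strong seasonal structure.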
The researchers compared the model against three traditional nowcasting methods: the High Resolution Rapid Refresh (HRRR) numerical forecast, an optical flow (OF) algorithm, and the persistence model. Google AI's model outperformed all three; using precision and recall graphs, the quality of the U-Net's nowcasting was shown to be better.
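A precision/recall comparison of this kind can be sketched as below: binarize predicted and observed precipitation at a rain/no-rain threshold, then score per pixel. The threshold and the toy grids are illustrative assumptions, not values from the paper.

```python
# Per-pixel precision and recall for rain/no-rain nowcasts.
def precision_recall(predicted, observed, threshold=0.1):
    pred = [p >= threshold for p in predicted]
    obs = [o >= threshold for o in observed]
    tp = sum(p and o for p, o in zip(pred, obs))        # rain predicted, rain seen
    fp = sum(p and not o for p, o in zip(pred, obs))    # false alarm
    fn = sum(o and not p for p, o in zip(pred, obs))    # missed rain
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy flattened precipitation grids (e.g. mm/hr per pixel).
predicted = [0.0, 0.2, 0.5, 0.0, 0.3]
observed  = [0.0, 0.3, 0.0, 0.0, 0.4]
print(precision_recall(predicted, observed))  # (0.6666666666666666, 1.0)
```

Sweeping the threshold traces out a precision-recall curve; a model whose curve sits above another's, as the U-Net's reportedly does, is better at every operating point.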
In Google's words: "As can be seen, the quality of our neural network forecast outperforms all three of these models (since the blue line is above all of the other models' results). It is important to note, however, that the HRRR model begins to outperform our current results when the prediction horizon reaches roughly 5 to 6 hours."
Moreover, the model produces its predictions effectively instantaneously, an advantage over traditional methods like HRRR, which carry a computational latency of 1-3 hours. This allows the machine learning model to work with fresh data. That said, the machine learning model has not entirely superseded HRRR's numerical model.
In contrast, the numerical model used in HRRR can make better long-term predictions, in part because it uses a full 3D physical model — cloud formation is harder to observe from 2D images, and so it is harder for ML methods to learn convective processes.
Google envisions that it might be fruitful to combine the two methods, HRRR and the machine learning model, to obtain forecasts that are both fast and accurate over short and long horizons. According to the firm, they are also looking at applying ML directly to 3D observations in the future.
If you are interested in finding out more, you may refer to the paper published on arXiv here.