Two recent articles, read one after the other, offer a perspective on current machine learning developments that point to major advances in how we will be able to mine very large data sets: “Neural Networks Meet Space” from Symmetry Magazine (published jointly by SLAC and Fermilab) and “A Deep Neural Network of Light” from Physics Today (American Institute of Physics). Together they suggest we may soon analyze such data several orders of magnitude faster than with traditional methods, and a further two orders of magnitude faster than with conventional electronics.
“Neural Networks Meet Space” relates the extraordinary research done by Yashar Hezaveh, Laurence Perreault Levasseur, and Phil Marshall at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at Stanford/SLAC, recently published in Nature, in which strong gravitational lenses are analyzed using a convolutional neural network. Gravitational lenses are complex distortions of spacetime, predicted by Einstein, produced by the gravity of massive foreground galaxies or galaxy clusters, which bend the path of the light reaching us from background galaxies. These distortions allow astrophysicists to quantify, and develop a history of, the dark matter that makes up 85% of the matter in the universe and the dark energy driving the acceleration of its expansion.
Traditionally, such analyses were done by comparing computation-intensive simulated lensing models with actual images, a process that could take weeks to months. Using a neural network, however, the same analysis can be done in seconds, once the network has been “trained” for about a day by presenting roughly half a million telescope images of gravitational lenses to the system.
Remarkably, in addition to being able to automatically identify a strong gravitational lens, the neural network was able to elucidate the properties of each lens (mass distribution and magnification of the background object).
As the article explains, “Neural networks are inspired by the architecture of the human brain, in which a dense network of neurons quickly processes and analyzes information. In the artificial version, the ‘neurons’ are single computational units that are associated with the pixels of the image being analyzed. The neurons are organized into layers, up to hundreds of layers deep. Each layer searches for features in the image. Once the first layer has found a certain feature, it transmits the information to the next layer, which then searches for another feature within that feature, and so on.”
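That layered “feature within a feature” idea can be sketched in a few lines of Python. This is a toy illustration only, not the KIPAC model: the image is random noise and the two hand-picked filters are stand-ins for the many filters a real network learns during training.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the image
    and record how strongly each patch matches the kernel's pattern."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Nonlinearity between layers: keep positive responses, zero the rest."""
    return np.maximum(x, 0.0)

# Toy 8x8 "telescope image" (random pixels for illustration).
rng = np.random.default_rng(0)
image = rng.random((8, 8))

edge_filter = np.array([[-1., 0., 1.],
                        [-1., 0., 1.],
                        [-1., 0., 1.]])   # layer 1: responds to vertical edges
blob_filter = np.array([[0.,  1., 0.],
                        [1., -4., 1.],
                        [0.,  1., 0.]])   # layer 2: responds to blobs in the edge map

layer1 = relu(conv2d(image, edge_filter))   # first feature map: 6x6
layer2 = relu(conv2d(layer1, blob_filter))  # feature of a feature: 4x4
print(layer1.shape, layer2.shape)  # (6, 6) (4, 4)
```

Each layer's output shrinks slightly and becomes more abstract; a deep network stacks many such stages, ending in units that respond to whole objects such as a lensed galaxy.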
And now, it seems that another advance could make this type of work even more efficient!
As related in AIP’s Physics Today, Marin Soljačić, Dirk Englund (both at MIT), and colleagues developed a proof-of-concept photonic circuit to perform the operations underlying neural networks, one that may run those operations two orders of magnitude faster than their conventional electronic counterparts.
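The workload being accelerated is simple to state: the dominant cost of a neural-network layer is a matrix-vector multiply, y = f(W·x). Photonic implementations commonly factor the weight matrix by singular value decomposition, W = U·diag(s)·Vᴴ, since the unitary factors map naturally onto meshes of interferometers and the diagonal onto optical attenuation or amplification. The sketch below shows the factorization in numpy; the sizes and random weights are illustrative assumptions, not the MIT device's parameters.

```python
import numpy as np

# A fully connected layer reduces to y = f(W @ x). The photonic chip's
# promise is doing the W @ x step with light rather than transistors.
# (Illustrative sizes and random weights only.)
rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4))   # layer weights
x = rng.standard_normal(4)        # input activations

# Factor W into the three stages a photonic mesh would implement:
# unitary mesh (Vh) -> per-channel gains (s) -> unitary mesh (U).
U, s, Vh = np.linalg.svd(W)
y_optical = U @ (s * (Vh @ x))    # the three "optical" stages in sequence
y_direct = W @ x                  # conventional electronic reference

print(np.allclose(y_optical, y_direct))  # True
```

The two paths agree to floating-point precision; in hardware, the hope is that the optical path performs the same computation at far higher speed and lower energy per operation.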
As John Naisbitt wrote in Megatrends, “We are drowning in information but starved for knowledge.” Given the remarkable advances outlined above, we may yet be able to develop solutions for the ingestion and useful metabolism of the coming deluge.