Can we mitigate two stressors (depressed economic circumstances and climate change) by applying solutions to one toward the other?
Let's consider the following assumptions:
Here are some of the Energy Watch Group's key findings:
The estimated global spending needed by 2050 on renewable sources and electrification is $110 trillion (2% of global GDP over that period), of which $95 trillion is already committed. A 100% renewable energy system would employ 35 million people worldwide, up from 9.8 million today. [Source: IRENA]
The questions we need to address:
A latent majority (61% in the U.S. [ABC News/Stanford 2018]) supports climate change mitigation, but the size of the committed minority (e.g., Greta Thunberg's school strikes, Green New Deal proponents) needs to cross a critical threshold to have a marked effect on policy makers. Are socioeconomically depressed areas the right targets for reaching a tipping point? Do we have access to economically feasible technologies? How do we amplify success and develop a bandwagon effect with positive feedback?
Answers to these questions require an integrated, multidisciplinary approach: engineering for technology selection and complex-systems analysis; design expertise for packaging, logos, and instruction manuals; business/finance for financial modeling; social/political sciences for community approach programming; and education specialists for education modeling.
A fascinating and worthwhile project.
Two recent articles, “Neural Networks Meet Space” from Symmetry Magazine (published jointly by SLAC and Fermilab) and “A Deep Neural Network of Light” from Physics Today (American Institute of Physics), read one after the other, provide a perspective on current machine learning developments. Together they point to major advances in how we will be able to mine very large data sets: several orders of magnitude faster than with traditional methods, and an additional two orders of magnitude faster than with conventional electronics.
Neural Networks Meet Space relates the extraordinary research done by Yashar Hezaveh, Laurence Perreault Levasseur and Phil Marshall at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at Stanford/SLAC, recently published in Nature, in which strong gravitational lenses are analyzed using a convolutional neural network. Gravitational lenses are complex distortions of spacetime—predicted by Einstein—produced by the gravity of foreground massive galaxies or galaxy clusters that affect the path of the light reaching us from background galaxies. These distortions allow astrophysicists to quantify, and develop a history of, the dark matter that makes up 85% of matter in the universe and the dark energy driving the acceleration of its expansion.
Traditionally, such analyses were done by comparing computationally intensive mathematical lensing-model simulations with actual images, which could take weeks to months. Using a neural network, however, the same analysis can be done in seconds, once the network has been “trained” for about a day by presenting roughly half a million telescope images of gravitational lenses to the system.
Remarkably, in addition to automatically identifying a strong gravitational lens, the neural network was able to elucidate the properties of each lens (the mass distribution and the magnification of the background object).
As the article explains “Neural networks are inspired by the architecture of the human brain, in which a dense network of neurons quickly processes and analyzes information. In the artificial version, the ‘neurons’ are single computational units that are associated with the pixels of the image being analyzed. The neurons are organized into layers, up to hundreds of layers deep. Each layer searches for features in the image. Once the first layer has found a certain feature, it transmits the information to the next layer, which then searches for another feature within that feature, and so on.”
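The layer-on-layer idea the article describes can be sketched in a few lines of code. This is a hypothetical toy, not the KIPAC team's actual network: each layer convolves the previous layer's output with a small kernel and keeps only positive responses, so the second layer searches for a feature within the first layer's features.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution of a grid of values with a small kernel,
    followed by a ReLU (keep only positive activations)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(max(s, 0.0))  # ReLU nonlinearity
        out.append(row)
    return out

# Layer 1: a kernel that responds to horizontal intensity changes (an edge).
edge_kernel = [[-1.0, 1.0]]
# Layer 2: a kernel that averages layer-1 responses -- a feature of a feature.
blob_kernel = [[0.25, 0.25],
               [0.25, 0.25]]

image = [[0.0, 0.0, 1.0, 1.0],
         [0.0, 0.0, 1.0, 1.0],
         [0.0, 0.0, 1.0, 1.0]]

layer1 = conv2d(image, edge_kernel)   # lights up at the 0 -> 1 boundary
layer2 = conv2d(layer1, blob_kernel)  # searches within layer 1's output
```

Real networks stack up to hundreds of such layers and *learn* the kernels from training images rather than hand-coding them, but the flow of information from layer to layer is the same.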
And now, it seems that another advance could make this type of work even more efficient!
As related in AIP’s Physics Today, Marin Soljačić, Dirk Englund (both at MIT), and colleagues developed a proof-of-concept photonic circuit that performs the operations underlying neural networks, potentially two orders of magnitude faster than their conventional electronic counterparts.
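To see what a photonic circuit would be accelerating, note that the workhorse operation of a neural-network layer is a matrix-vector multiply followed by a nonlinearity. A minimal sketch with toy, purely illustrative numbers:

```python
def dense_layer(weights, inputs):
    """One dense layer: weighted sums (a matrix-vector product),
    then a ReLU nonlinearity on each output."""
    return [max(sum(w * x for w, x in zip(row, inputs)), 0.0)
            for row in weights]

# Toy weights and inputs -- illustrative only.
W = [[0.5, -1.0],
     [1.5,  0.5]]
x = [2.0, 1.0]

y = dense_layer(W, x)  # [0.0, 3.5]
```

It is this linear-algebra core, repeated millions of times per inference, that an optical implementation can in principle carry out as light propagates through the circuit, rather than as sequential electronic arithmetic.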
As Naisbitt said in Megatrends, “We are drowning in information but starved for knowledge.” Given the remarkable advances outlined above, we may yet be able to develop solutions for ingesting, and usefully metabolizing, the coming deluge.