Can we mitigate two stressors (depressed economic circumstances and climate change) by leveraging solutions in one towards the other?
Let's consider the following assumptions:
Here are some of the Energy Watch Group's key findings:
- The estimated global spending needed by 2050 on renewable sources and electrification is $110 trillion (2% of global GDP during that period), of which $95 trillion is already committed.
- A 100% renewable energy sector would employ roughly 35 million people worldwide, up from 9.8 million today. [Source: IRENA]
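As a rough, back-of-the-envelope check on the scale of these figures, the short Python sketch below recomputes the implied quantities. The 2020-2050 horizon and the flat 30-year averaging of global GDP are my own simplifying assumptions for illustration, not part of the cited reports.

```python
# Back-of-the-envelope check of the figures cited above.
# The 30-year horizon and flat-GDP averaging are illustrative assumptions.

total_spend_tn = 110   # estimated spend on renewables + electrification, $ trillion
committed_tn = 95      # already committed, $ trillion
share_of_gdp = 0.02    # the cited "2% of global GDP during that period"
years = 30             # assumed 2020-2050 horizon

incremental_tn = total_spend_tn - committed_tn
implied_cumulative_gdp_tn = total_spend_tn / share_of_gdp
implied_avg_annual_gdp_tn = implied_cumulative_gdp_tn / years

print(f"Not-yet-committed spending: ${incremental_tn} trillion")
print(f"Implied cumulative global GDP over the period: ${implied_cumulative_gdp_tn:,.0f} trillion")
print(f"Implied average annual global GDP: ${implied_avg_annual_gdp_tn:,.0f} trillion/year")
```

On these assumptions, the not-yet-committed portion is only about $15 trillion, roughly half a trillion dollars per year, which helps frame the questions that follow.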
The questions we need to address:

- A latent majority (61% in the U.S. [ABC News/Stanford 2018]) supports climate change mitigation, but the committed minority (e.g. the Greta Thunberg school strikes, Green New Deal proponents) needs to cross a critical threshold to have a marked effect on policy makers. Are socioeconomically depressed areas the right targets for reaching a tipping point?
- Do we have access to economically feasible technologies?
- How do we amplify successes and develop a bandwagon effect with positive feedback?

Answers to these questions require an integrated, multidisciplinary approach: engineering for technology selection and complex-systems analysis, design expertise for packaging, logos, and instruction manuals, business and finance for financial modeling, the social and political sciences for community outreach programming, and education specialists for education modeling. A fascinating and worthwhile project.

Two recent articles, “Neural Networks Meet Space” from Symmetry Magazine (published jointly by SLAC and Fermilab) and “A Deep Neural Network Of Light” from Physics Today (American Institute of Physics), read one after the other, provide a perspective on current machine learning developments that points to major advances in how we will be able to mine very large data sets: several orders of magnitude faster than traditional methods, and an additional two orders of magnitude faster than with conventional electronics.

“Neural Networks Meet Space” relates the extraordinary research done by Yashar Hezaveh, Laurence Perreault Levasseur and Phil Marshall at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at Stanford/SLAC, recently published in Nature, in which strong gravitational lenses are analyzed using a convolutional neural network. Gravitational lenses are complex distortions of spacetime—predicted by Einstein—produced by the gravity of massive foreground galaxies or galaxy clusters that affect the path of the light reaching us from background galaxies. These distortions allow astrophysicists to quantify, and develop a history of, the dark matter that makes up 85% of the matter in the universe and the dark energy driving the acceleration of its expansion.

Traditionally, such analyses were done by comparing computationally intensive simulated lensing models with actual images, which could take weeks to months. Using a neural network, however, allows the same analysis to be done in seconds, once the network has been “trained” for about a day by presenting roughly half a million telescope images of gravitational lenses to the system. Remarkably, in addition to automatically identifying a strong gravitational lens, the neural network was able to determine the properties of each lens (the mass distribution and the magnification of the background object).

As the article explains: “Neural networks are inspired by the architecture of the human brain, in which a dense network of neurons quickly processes and analyzes information. In the artificial version, the ‘neurons’ are single computational units that are associated with the pixels of the image being analyzed. The neurons are organized into layers, up to hundreds of layers deep. Each layer searches for features in the image. Once the first layer has found a certain feature, it transmits the information to the next layer, which then searches for another feature within that feature, and so on.”

And now, it seems that another advance could make this type of work even more efficient!
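To make the quoted description concrete, here is a minimal sketch in Python (using PyTorch) of a small convolutional network that maps a telescope image to a handful of numerical lens parameters. This is not the KIPAC group's published architecture; the image size, layer widths, and the number of output parameters are arbitrary choices made for the illustration.

```python
# Illustrative convolutional network: image in, a few lens parameters out.
# NOT the published KIPAC model; sizes and parameter count are assumptions.
import torch
import torch.nn as nn

class LensParameterNet(nn.Module):
    def __init__(self, n_params: int = 5):
        super().__init__()
        # Each convolutional layer looks for features in the output of the
        # previous layer ("a feature within that feature"), as the quote describes.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Fully connected layers reduce the feature maps to the estimated lens
        # properties (e.g. mass-distribution parameters, magnification).
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, n_params),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# One 64x64 single-channel "telescope image" (random noise as a stand-in for data).
image = torch.randn(1, 1, 64, 64)
print(LensParameterNet()(image).shape)  # torch.Size([1, 5])
```

Training such a network amounts to showing it many lens images with known parameters, the "half a million telescope images" regime described above. The heavy lifting inside each layer reduces to large matrix multiplications, which is precisely the operation the photonic circuit described next aims to accelerate.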
As related in AIP’s Physics Today, Marin Soljačić, Dirk Englund (both at MIT), and colleagues developed a proof-of-concept photonic circuit to perform the operations underlying neural networks, which may run two orders of magnitude faster than their conventional electronic counterparts. As Naisbitt said in Megatrends, “We are drowning in information but starved for knowledge.” Given the remarkable advances outlined above, we may yet be able to develop solutions for the ingestion and useful metabolism of the coming deluge.

A distressing dissonance exists between some influential elements of our government and the scientific establishment. Whether evident in the recently proposed presidential budget or in the pronouncements of some members of Congress, one cannot but observe that science is under siege. One may debate the motivations behind this (e.g. national debt concerns, presumptions, well founded or not, of future economic growth, skepticism, whether in good or bad faith, towards well-established climate science, etc.), but it is unreasonable not to understand, appreciate and internalize the benefits of scientific inquiry and of curiosity-driven research. One can argue that our quest for, and promotion of, rational thinking have rarely been more salient. Federally funded organizations that have delivered so many discoveries are now facing dramatic cuts that would hobble our competitiveness and, I dare say, perhaps even our civilization.

The FY18 budget sent to Congress would cut federal spending on basic research by 13-17%. The following funding reductions have been proposed (in addition to a reportedly very low ceiling of 10% on permitted indirect costs):

- National Institutes of Health: -22%
- DOE Office of Science: -17% (incl. -43% for biological/environmental research)
- NIST: -23% (incl. -13% for research)
- NOAA: -16% (incl. -32% for weather and climate research)
- U.S. Geological Survey: -15% (incl. -24% for the land resources mission area)
- National Science Foundation: -11% (incl. -14% for education)

The budget is partially based on an overly optimistic expectation that economic growth will generate enough revenue to eliminate the U.S. deficit in 10 years. However, the research done in our labs is the principal engine of that growth. Too many ignore or misunderstand the benefits of the discoveries basic research has produced. One recalls Michael Faraday’s rejoinder to William Gladstone (British Chancellor of the Exchequer) who, when questioning the value of Faraday’s experiments on electricity, was told: “Why, Sir, there is every possibility that you will soon be able to tax it!” Similarly, most iPhone owners have no idea that the device would not exist without the fundamental and applied research that produced transistors, integrated circuits, cellular communications, GPS, LEDs, and a host of other technologies we now take for granted. Our economic growth depends on a science and technology pipeline that starts with curiosity-driven research with no immediately discernible applications, continues with development and industry-ready maturation, and ends in entirely new products and services that can rarely be forecast at the outset. Cutting basic research funding will inexorably dry up this pipeline and severely damage future growth.
Most of that blue-sky research must be supported by our tax dollars, and just as we must press our government to defend this essential endeavor, we must strive to explain these pursuits to all stakeholders, including the voters and the press.

U.S. patent law is addressed in the U.S. Constitution: Article I, Section 8. [link] In addition, one remembers that the Bayh-Dole Act (1980) gives inventors funded by government research contracts or grants the freedom to exploit their inventions. [link] More recently, the federal government has clearly indicated its desire to see lab-to-market commercialization of technologies. See, in particular:
Given the current assault on science research funding, one imagines that these institutions would welcome the ability to create a supplementary income stream for their general fund, as well as to reward their departments and inventors. Being a close admirer of SLAC National Accelerator Laboratory, a DOE federal lab, I looked in particular at the DOE’s Technology Transfer Execution Plan listed above. I focus on its two outlined objectives:
It is important, in my view, to recognize that, in addition to the potential financial returns to a general fund, a department or lab, and the inventor(s), one should also consider the following benefits, particularly given some labs’ ethos of freely sharing their intellectual capital:
Continued in the next article at www.pierreschwob.com/blog/archives/12-2016

(Cont'd from benefits-of-technology-transfer.html) Here is an outline of the methodology for the development and functioning of an Office of Technology Transfer at a research university or lab. Note that the choice of the name “Office of Technology Transfer” (OTT) is deliberate. One could instead propose “Office of Technology Licensing,” as used at some universities. The fact is that many institutions have an ethos in which researchers believe their work should be freely available to all. As I wrote in a previous blog, technology transfer has many important benefits beyond financial returns. One of the central thrusts of the OTT is that it needs to be entrepreneurial, collaborative, respectful of the lab’s or university’s ethos, and marketing-oriented. This means:
Buy-in from the labs and their staff is critical
Methodology for assessing and developing an invention towards a marketable patent:
A possible breakdown of the proceeds of license agreements:
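As a purely hypothetical illustration of what such a breakdown might look like, the Python sketch below splits net license proceeds among the inventor(s), the originating department or lab, and the general fund after recovering patent costs. The equal one-third shares and the cost-recovery rule are assumptions made for the sketch, not any lab’s actual policy.

```python
# Hypothetical split of license proceeds. All percentages and the patent-cost
# recovery rule are illustrative assumptions, not an actual OTT policy.
def split_proceeds(gross: float, patent_costs: float,
                   inventor_share: float = 1/3,
                   department_share: float = 1/3,
                   general_fund_share: float = 1/3) -> dict:
    """Recover patent costs first, then split the net among the beneficiaries."""
    net = max(gross - patent_costs, 0.0)
    return {
        "inventor(s)": net * inventor_share,
        "department/lab": net * department_share,
        "general fund": net * general_fund_share,
    }

# Example: $250,000 in royalties, $40,000 of patent prosecution costs recovered first.
print(split_proceeds(250_000, 40_000))
# {'inventor(s)': 70000.0, 'department/lab': 70000.0, 'general fund': 70000.0}
```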
OTT should consider allowing equity participation instead of cash when dealing with start-ups (where cash is precious). In addition to the internal outreach mentioned above, a concerted external PR effort should prepare the field. It is critical to the success of the operation that the lab’s unique assets be leveraged for strong OTT returns and to maximize the other benefits of successful technology transfer activities.

A chance encounter billions of years ago led to the explosion of life on Earth. An amoeba-like organism absorbed a bacterium that had harnessed sunlight to separate oxygen from water molecules. The descendants of that ancestor of all plants and trees transformed our atmosphere, allowing animals, and eventually us, to evolve on Earth.
Chance encounters between moving objects (remember the end of the dinosaurs?) or between interacting sentient beings can also have huge consequences. Just as we must avoid disastrous results (check for Earth-crossing asteroids!), we should foster positive outcomes and provide stages where the latter can occur.

Many notable advances have been the result of chance encounters, sometimes between experts from different disciplines. Ed Catmull, the computer scientist who heads Pixar and Walt Disney Animation Studios, writes that the best ideas emerge when talented people from different disciplines work together. Two Bell Labs radio astronomers, Robert Wilson and Arno Penzias, were racking their brains in 1964 to explain a persistent noise they observed with their radio antenna. A chance meeting with an MIT physicist, who mentioned a preprint authored by three Princeton physicists, led them to understand that they had discovered the Cosmic Microwave Background, a predicted radiation left over from the early universe, only 380,000 years after the Big Bang. This earned them the Nobel Prize in 1978.

It was as an article of faith that cross-pollination is essential to the furtherance of scientific objectives that Jonathan Dorfan, the founding director of the Okinawa Institute of Science and Technology (OIST), institutionalized this concept through work areas with no boundaries between the various disciplines. Indeed, all meeting and resting areas were designed to force experts to mingle.

I used to host Friday lunches at the Stanford Faculty Club and made it a habit to invite people from different departments each week. It was always a joy to hear “Oh, you are working on this? Did you know that…?” Some of these conversations led to active and fruitful cooperation.

Since most universities and labs cannot transform their existing physical layouts, they should make every effort to promote exchanges across their silos in other ways. This can be as simple as running a weekly random drawing and inviting those selected to share a meal. In my experience, 6 to 8 participants is an ideal number: it allows everyone to take part in one conversation while still permitting one-to-one exchanges. If you know of an important advance resulting from a serendipitous meeting, please share a comment.

“Did you ever observe to whom the accidents happen? Chance favors only the prepared mind.” -- Louis Pasteur

Our civilization progresses through intellectual and technological revolutions (often called paradigm shifts). We began with the invention of language, tools, agriculture, and writing. We drove through the age of exploration, the invention of the printing press, the Renaissance, the Enlightenment, and the Industrial Revolution. We are now fully engaged in the Information Age, with broad access to computers, lightning-fast communications, intelligent software, and soon AI.
However, the promises of the Digital Age call for another paradigm shift—a phase transition (to borrow from thermodynamics)—to solve the issue famously evoked by Naisbitt in Megatrends: “We are drowning in information but starved for knowledge.” We believe that context is essential in transforming information into knowledge, and we are laying the groundwork for an ambitious project to benefit everyone: a platform upon which a contextual reference tool will be built to usher the Information Age into the Knowledge Age. This platform is the Open Ontology Project.

The Ontology is a scalable, peer-reviewed undertaking to develop a massive hierarchical organization (ontology) of human knowledge. The Ontology is an essential tool in its own right. Such an organization of knowledge helps frame a subject within the domain or domains it belongs to. This helps students understand the “belongs to” relationships between the various concepts they are exposed to in the classroom. The Ontology creates a framework, or mental matrix, to help metabolize information into knowledge.

The Ontology also provides lists and collections that are otherwise difficult to find elsewhere. Whether you are interested in historical events, wines, dogs, galaxies, plumbing or opera, the Ontology provides an easily navigable landscape and a coherent index of all that we know. It is to be mined by anyone for unlimited applications. In the near term, the Ontology can power an AI/machine-learning-based recommendation engine that provides scholars and researchers with links to the peer-reviewed literature. Conversely, it can be used by editors as a specialist and expert-reviewer recommendation engine. More generally, it will offer a new methodical way to explore the web, to learn, to shop, to work, to play, and myriad other applications others may think of and develop on top of the Ontology.

The Ontology is different from Wikipedia in that it is fundamentally based on relations. With its top-down organization, it allows intuitive navigation between related concepts. It is also text-sparse: Ontology descriptions are limited to short, easily digestible 150-word introductions, augmented by links to external references. As such, the Ontology can be viewed as a reasoned index to references such as Wikipedia and online education assets such as Khan Academy, Coursera, and edX.
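To make the structure concrete, here is a minimal sketch in Python of how a single Ontology node might be represented: a hierarchical “belongs to” link, cross-links to related concepts, a text-sparse description, and pointers to external references. The field names, the 150-word cap enforcement, and the example entries are illustrative assumptions, not the project’s actual schema.

```python
# Illustrative sketch of a hierarchical, relation-based ontology node.
# Field names and example data are assumptions, not the Open Ontology Project's schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class OntologyNode:
    name: str
    parent: Optional["OntologyNode"] = None                      # "belongs to" relationship
    children: List["OntologyNode"] = field(default_factory=list)
    related: List["OntologyNode"] = field(default_factory=list)  # cross-links between concepts
    description: str = ""                                        # short, text-sparse introduction
    external_links: List[str] = field(default_factory=list)      # Wikipedia, Khan Academy, etc.

    def add_child(self, child: "OntologyNode") -> "OntologyNode":
        child.parent = self
        self.children.append(child)
        return child

    def set_description(self, text: str, max_words: int = 150) -> None:
        """Enforce the text-sparse rule: introductions of at most max_words words."""
        if len(text.split()) > max_words:
            raise ValueError(f"Description exceeds {max_words} words")
        self.description = text

    def path(self) -> str:
        """Top-down path from the root, e.g. 'Knowledge > Science > Astronomy'."""
        return self.name if self.parent is None else f"{self.parent.path()} > {self.name}"

# Tiny example hierarchy.
root = OntologyNode("Knowledge")
science = root.add_child(OntologyNode("Science"))
astronomy = science.add_child(OntologyNode("Astronomy"))
astronomy.set_description("The study of celestial objects and the universe as a whole.")
astronomy.external_links.append("https://en.wikipedia.org/wiki/Astronomy")
print(astronomy.path())  # Knowledge > Science > Astronomy
```

Navigating up and down these parent/child links, and sideways along the related-concept links, is the kind of intuitive, relation-based exploration described above.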