By Mike Koetting January 17, 2019
A few months back, while reading the book Fly Girls, about early women aviators, I was struck by how insanely dangerous early airplanes were. They fell out of the sky quite regularly and a remarkable percentage of the early aviators died in crashes. It got me wondering, anachronistically, how the development of airplanes would happen in today's more risk-averse world. I found the same sentiment in a complaint voiced by a Silicon Valley developer: that achieving self-driving cars is being impeded by society's unwillingness to tolerate the trial and error necessary to make autonomous vehicles a functioning reality. This is a fair comment, although it doesn't address the fact that the early fliers almost exclusively killed themselves; when autonomous vehicles run amok, it is unsuspecting bystanders who bear the brunt. Nevertheless, this raises the broader issue of how much risk (and for whom) society is willing to incur for technological progress.
My last post recalled an observation by Donald Michael (The Unprepared Society, 1968) that technology had increased the interconnectivity of people in hitherto unimaginable ways. His specific example was that 150 years ago there could be no blackouts in major cities because someone would have had to collect all the kerosene lanterns. He had a second, related observation in the same book that has also stuck with me over the years. He pointed out that, in statistical terms, the expected outcome of any uncertain event is determined by multiplying the probability of the thing happening by the magnitude of the consequence. Thus, he asserted, if an event had a large enough consequence—say the ocean rising 100 feet—even if the probability was very small, the expected harm could still be large because the consequence was so huge.
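To put that in simple notation (the formula is just Michael's point restated; the numbers are purely hypothetical, chosen only to show how a tiny probability attached to an enormous consequence still yields a large expected harm):

\[ E[\text{harm}] = p \times C, \qquad \text{e.g. } 0.001 \times 10^{9} = 10^{6}. \]

Even a one-in-a-thousand chance of a consequence measured in the billions leaves an expected harm in the millions, which is hardly something to shrug off.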
When you combine increasing density and interconnectedness, this question of risk from new technology becomes critical. In addition to the environment, where the risk is already off the chart, two areas that trouble me are genetics and Artificial Intelligence (AI). To be clear: both have tremendous promise to do good, and I am suspicious of knee-jerk opposition to advances in those areas. (For instance, I think much of the discussion about Genetically Modified Food is simply hysteria. We've been playing around with plant breeding since the beginning of time. This simply speeds up the process.)
Still, we would be kidding ourselves if we didn’t acknowledge that the risks involved in these endeavors include consequences that are unfathomable.

Experimenting with genetics may, in some respects, be less immediately dangerous because so much of it happens in academic laboratories with relatively strict protocols and reasonable communal ethics around risk mitigation. But the truth is that only the tiniest portion of the population has considered in any thoughtful way where the science is taking us.
In recent years, scientists have developed the ability to mess around with genetic structures in ways that were the stuff of science fiction as recently as a decade or so ago. An NIH primer on gene-editing says:
Most of the changes introduced with genome editing are limited to somatic cells, which are cells other than egg and sperm cells. These changes affect only certain tissues and are not passed from one generation to the next. However, changes made to genes in egg or sperm cells (germline cells) or in the genes of an embryo could be passed to future generations. Germline cell and embryo genome editing bring up a number of ethical challenges.
That puts it modestly. We have no idea what could happen if we start changing germline cells. A fascinating article in The Harvard Magazine reviewed the issue of whether we should try to use gene-editing to eliminate malaria. All the scientists interviewed underlined what a scourge malaria was, particularly in the under-developed world. And they all exuded relatively high degrees of confidence that science was at (or very close to) a point where a major attempt could be mounted. But they all also admitted a large degree of uncertainty about whether they should. Among other things, they conceded that even if successful, there might be unforeseen consequences. For instance, we don’t know what other species might fill that biological niche. Additionally, there was the possibility of failure—or, more likely, only partial success—which would raise a different set of concerns. Consequently, all the scientists in this article emphasized the need for world-wide public engagement. One of them said:
There’s tremendous humanitarian need for a lot of these applications…but the limiting factor may not be the time required for us to build a [genetic modification] in the laboratory. It may be the time required for society to decide whether or not it should be used.
While these particular scientists seemed relatively sanguine about the possibility that society could somehow say "No—this line of inquiry is simply too dangerous," we should all be thinking about whether saying no is even possible. In the last decade the ability to do gene-editing has become relatively accessible in scientific settings, creating a real possibility that somebody will try to use it in a poorly controlled way.
The problems with AI are of a different sort: less likely to create a species-wide catastrophe, but certainly big enough to be enormously disruptive to society. There are very specific problems, such as the ability of AI to create so-called "deep fakes" that make it almost impossible to tell whether a video is real or doctored, or the ability to use facial recognition for truly Big Brother kinds of control, as China is actively doing. We simply have no social mechanisms to deal with these. Worse yet, there is a small army of entrepreneurs churning out ideas and applications at a furious pace. The only appropriate image is the Sorcerer's Apprentice.
There is also the way big data and AI algorithms are quietly embedding rules in all aspects of life. Think of the slightly annoying way that Word can "spell-check" your sentence into nonsense. Now imagine what happens if rules like those are put on steroids and applied to every aspect of life. Cathy O'Neil, a mathematician, has written a cleverly titled book, Weapons of Math Destruction, showing that these applications of data are creeping into virtually all aspects of life. And there is a plethora of suggestions to wire them further into our day-to-day lives. O'Neil sees this as severely problematic because these systems are opaque (the people using them often do not know how they actually work); they are invisible (you often don't know when they are being applied to you); they are unregulated (it's not even clear how they could be regulated); and they have a deep potential not only to reinforce but to accelerate the status quo, because they don't ask what should be the case, only what the current data predict.
Hany Farid, a Dartmouth College computer science professor, worries about the impact of letting Silicon Valley launch one thing or another with little sense of the consequences:
If a biologist said, ‘Here’s a really cool virus; let’s see what happens when the public gets their hands on it,’ that would not be acceptable. And yet it’s what Silicon Valley does all the time….We have to understand the harm and slow down on how we deploy technology like this.
The idea of slowing down technology is not part of the Western psyche under any circumstance. The problem is made worse by an unwillingness to regulate capital and the inability of government to concentrate on really important issues. Congress has shown itself incapable of understanding, let alone regulating, computer technology. (Part of this problem stems from Congress having eliminated its Office of Technology Assessment under Newt Gingrich, leaving it without appropriate staff on these critical issues. Congress is apparently not interested in addressing even this minimal fix.)

In a sane world, governments and other entities would be putting the risks inherent in this technological progress front and center. The issues being raised are not marginal. Unless we find a way to rationalize the cancer-like growth of technology, we are likely to find that our species is too smart by half.