On the risk of becoming stupid

In the 1980s, I used the Defense Advanced Research Projects Agency Network, or Darpanet, to collaborate with colleagues worldwide. Graphical Web browsers did not yet exist, but a text-based browser called Lynx allowed us to use the Web markup language HTML to share and format our work in progress.

Around 1990, the Darpanet became the Internet. Soon Netscape appeared with its Navigator browser, advertising “the Web for everyone.” Inspired, I purchased the domain name woody.com and created Woody’s Agora. It was a blog before blogs existed.

One of my old musings was on software agents — the forerunners of the algorithmic approaches Amazon.com, Netflix and others use to offer advice on what you may want to buy, rent or know next. Then, as now, I had a low opinion of computer suggestions.

While the connection with this column’s theme of “technology and society” is obvious, what is the connection with “risk”? Simple. Allowing software to direct our interests increases, by orders of magnitude, the risk of becoming stupid.

Below is a lightly edited version of the piece I posted on that early blog in May 1995. I titled the entry:

On the Use of Agents Considered Harmful

Like many traveling businesspeople, I often find myself on the Hertz bus returning to an airport on Friday afternoons. During the short trip, with my eyelids beginning to droop, I love to eavesdrop on others’ week-ending philosophical insights. The following event took place on the way to Chicago’s O’Hare International Airport.

Two 30ish guys were returning to the airport together; one worked at the home office of a high-tech company, call him Homes. The other worked in the field, call him Fields.

Fields says to Homes, “Does your staff get such-and-such magazine anymore?” Homes says, “Nope. We don’t get any magazines or journals for the staff. We found that they spend too much time reading.

“In fact,” Homes goes on, “we don’t have subscriptions to any magazines or journals. We subscribe to a service. This service asks us what types of articles are of interest to us, which we specify as rules, like a Boolean or keyword query. The service then creates ‘agents,’ which apply these rules to magazines and journals, electronically ‘clip’ articles and deliver them to us on our network as files! That way people don’t waste time.”

And they don’t learn anything either, I thought.

In a way, this is another version of “garbage in, garbage out.” If the only things worth knowing are the broad categories you can specify logically, then the largest chunk of knowledge, the things you don’t yet know, will be hidden from you forever: “stuff you know in, stuff you know out.”
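
For concreteness, here is a minimal sketch, in Python, of the kind of rule-based clipping agent Homes describes. The keywords and article titles are hypothetical, and a real clipping service would match its rules against electronic feeds rather than a list of strings; the point is only how the filter works.

```python
# A minimal rule-based "clipping agent": it keeps only articles that
# match a keyword query and silently discards everything else.

articles = [
    "New database engine promises faster queries",
    "An unexpected essay on typography and screen layout",
    "Object-oriented design patterns for client-server systems",
]

# The subscriber's "interests," specified up front as keywords.
interests = {"database", "client-server", "object-oriented"}

def clip(article: str) -> bool:
    """Return True if any interest keyword appears in the title."""
    words = article.lower().split()
    return any(keyword in words for keyword in interests)

for article in articles:
    if clip(article):
        print("DELIVERED:", article)
    # The typography essay never matches a rule, so the reader never
    # sees it: "stuff you know in, stuff you know out."
```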

In a deeper sense, however, this vignette points to the weakest and most dangerous part of computers: They do what you say, not what you mean. Computers can only carry out a sequence of unambiguously specified logical instructions. We cannot even decide, in general, whether such a sequence will run forever (see Alan Turing’s halting problem).

It is impossible for me to specify logically all of the things that might interest me in a magazine. No computer can scan a magazine and free-associate. In fact, the act of browsing is its own reward: a new product announcement here, an ad whose clever layout suggests a design for an input screen there, a subject that never before generated interest catching my imagination.

Imagination! I guess that’s the real problem. Logic never made innovation. Insight, serendipity and fortuitousness are the paths of innovation. Letting logical “agents” in cyberspace determine what will interest you cuts you off from the future. The richness of information on a page of Wired magazine, or even Computerworld, cannot be logically specified for retrieval.

Cyberspace is a black box that can only be penetrated by logical query. The field of display, a 14-inch to 20-inch screen, is just not big enough to present things easily. Too many thoughts and physical manipulations are necessary to find information, let alone compare several pieces of information at the same time. Those who don’t learn the limitations of cyberspace are doomed to live in it.

Soon, Fields and Homes’ conversation drifts to the financial woes of their company. It seems that the stock price has dropped below a mythical lower barrier, and that their founders’ stock is next to worthless. Maybe they missed what was coming next.

The view in 2014

Today, computer suggestions (Where do my friends eat? What movies do they like? Where should I vacation next?) isolate us from the real world and plunge us into the virtual realm, where social contact is made through a smartphone. Nothing is sadder than watching a couple at a restaurant busily texting away. Are they texting each other? Will we have a society in the future, as in Isaac Asimov’s “I, Robot” series, where human contact is via telepresence and physical love is with robots?

Fitting an Elephant in the 3.11 Tsunami

Dave Tappin, a professor of marine geology at the British Geological Survey, says a ‘submarine slump’ added to the force of the 2011 tsunami.
My friend and colleague, the brilliant Dave Tappin, has compelling evidence that the height of the 3.11 tsunami was significantly augmented by an undersea slump, or submarine landslide. He has voiced this opinion for some time.

Let this serve as notice to those who rely on merely tweaking the parameters of simulations until they reproduce the results actually measured. The famous mathematician and physicist John von Neumann remarked that with four parameters he could “fit an elephant…” (Dyson, 2004). Nothing beats actual data, especially data pointed to by the weaknesses of a simulation.
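
A minimal numerical illustration of von Neumann’s warning, which assumes nothing about any actual tsunami simulation: give a model enough free parameters and it will pass through any data set exactly, and that says nothing about whether the underlying mechanism is right.

```python
# With enough free parameters, a polynomial "fits" anything exactly,
# even pure noise. The perfect fit proves nothing about mechanism.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 8)
y = rng.normal(size=8)  # pure noise, standing in for measurements

# A degree-7 polynomial has 8 coefficients: enough to hit all 8 points.
coeffs = np.polyfit(x, y, 7)
residuals = y - np.polyval(coeffs, x)

print("max residual:", np.max(np.abs(residuals)))  # ~0: a "perfect" fit to noise
```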

Moreover, this evidence shows that any argument from historical data, especially the kind of Bayesian analysis I do, must distinguish tsunamis generated by an earthquake alone from tsunamis generated by both an earthquake and a submarine landslide.
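
As a sketch of what that division means in practice, the snippet below treats the two populations as independent Poisson processes and updates a conjugate Gamma prior for each. Every number in it is hypothetical, chosen only to illustrate the bookkeeping, not taken from any real tsunami catalog.

```python
# Hedged sketch: separate occurrence rates for two tsunami populations,
# estimated with a Gamma-Poisson (conjugate) Bayesian update.
# All counts and priors below are hypothetical.

def posterior_mean_rate(prior_shape, prior_rate, events, years):
    """Posterior mean of a Poisson rate under a Gamma prior."""
    shape = prior_shape + events  # Gamma shape after observed counts
    rate = prior_rate + years     # Gamma rate after observed exposure
    return shape / rate           # events per year

# Population 1: tsunamis generated by earthquakes alone.
eq_only = posterior_mean_rate(prior_shape=0.5, prior_rate=1.0,
                              events=9, years=400)

# Population 2: earthquake plus submarine landslide (rarer).
eq_slide = posterior_mean_rate(prior_shape=0.5, prior_rate=1.0,
                               events=2, years=400)

print(f"earthquake-only rate:      {eq_only:.4f} per year")
print(f"earthquake+landslide rate: {eq_slide:.4f} per year")
# Lumping the two together would blur two physically different hazards.
```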

Read the full article at the Japan Times: Additional source for 2011 tsunami identified

“What Went Right” by Airi Ryu

A great article by Airi Ryu, a student from the University of Southern California, appeared today, March 15, in the Japan Times. A longer article was also published in the Bulletin of the Atomic Scientists on March 10. Both articles come from a longer paper that Ryu-san revised and published on February 26.

All three can be found here:

Ryu-san has recognized the hard work and constant effort that Tohoku EPCo devotes to safety. After so much has been published about what went wrong on 3.11, it is nice to see someone publishing about what went right.

Congratulations, Ryu-san, for a most excellent article from the heart.

Rebuilding The World

The nice and neat probabilities of a bell curve may work when we discuss height and weight, because the averages of human norms are well measured and accounted for. On the other hand, we do not yet know enough of the variables that go into creating a natural disaster to be able to define the costs of repairs. Before Hurricane Katrina, in 2005, Hurricane Andrew was the most costly hurricane in the U.S. In 2011 dollars, Hurricane Andrew cost $41.5 billion in damage, and Katrina cost $91 billion. There is no simple way to determine these values because of the variables; instead we have to rely on the art of resilience: the art of putting everything back together again.

What do we do when these variables we rely on don’t play well together?

Read the full article in the Nikkei Asian Review: Probabilities and possibilities or download the pdf here. Also read Bob Geller’s article Back to the future: Restarting Japan’s nuclear power plants.

Probabilities and possibilities

We have more than our share of extreme natural disasters in Asia. Earthquakes and tsunamis, volcanic eruptions and typhoons. The severity and costs are impossible to predict. Drawing up effective emergency plans is a Sisyphean task.

How should we approach this very unapproachable challenge? When disaster strikes, how can we respond?

The first step may be to change the way we think about probabilities.

Watch the tail

Suppose the tallest person in a room is 2 meters tall. What is the probability that a person who is 4 meters tall — twice the height — will walk in? Surely the odds are extremely low.

Height is distributed among individuals in “thin-tailed” fashion. On a bell curve, the extreme measurements sit in thin tails at each end, with the bulk of values clustered around the mean in the middle. The likelihood of extreme values is very low. Put another way, the values behave nicely.
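
To put a number on “extremely low,” here is the calculation under a normal (bell-curve) model of adult height. The mean and standard deviation are rough, commonly quoted figures, not measurements of any particular population.

```python
# How unlikely is a 4-meter person under a thin-tailed (normal) model?
# Mean and standard deviation are rough illustrative values.
from statistics import NormalDist

height = NormalDist(mu=1.7, sigma=0.1)  # adult height in meters
p_4m = 1.0 - height.cdf(4.0)            # P(height > 4 m), about 23 sigma out

print(p_4m)  # effectively zero: thin tails crush extreme values
```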

Hurricanes and other natural disasters are a different story.

Prior to 2005, the most costly hurricane in the U.S. had been Hurricane Andrew, which struck in 1992. In 2011 dollars, the storm caused $41.5 billion in damage. But then came Hurricane Katrina in August 2005. This storm wreaked $91 billion worth of damage, adjusted to 2011 dollars — more than twice the cost.

As you can see, with some disasters, extreme values do not behave nicely at all. Hurricane damage is “fat-tailed” — the tails at each end of the curve do not shrink quickly toward negligible values. The extremes are more extreme.

The events that occur in the fat tails are the “black swans” popularized by author Nassim Taleb — extreme, but not improbable, events that have major consequences.

The point? The lessons we can take from historical data, and the way we should think about the future, depend on whether we are dealing with thin- or fat-tailed events. With fat-tailed events, we must expect the next worst case to be far worse than the hitherto worst case.
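
A small sketch makes the contrast concrete. Below, a Pareto distribution stands in for fat-tailed damage (its tail exponent is hypothetical): doubling the threshold barely dents the exceedance probability, whereas under a normal model it collapses.

```python
# Thin vs. fat tails: how fast does P(X > x) fall as x doubles?
# All parameters are illustrative only.
from statistics import NormalDist

thin = NormalDist(mu=0.0, sigma=1.0)

def pareto_survival(x, alpha=1.1, x_min=1.0):
    """P(X > x) for a Pareto distribution with tail index alpha."""
    return (x_min / x) ** alpha

for x in (2.0, 4.0, 8.0):
    p_thin = 1.0 - thin.cdf(x)
    p_fat = pareto_survival(x)
    print(f"x={x}: thin tail {p_thin:.2e}, fat tail {p_fat:.2e}")

# Each doubling of x cuts the fat-tail probability by only ~2x, so the
# "next worst case" remains very much in play.
```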

But why are natural disasters fat-tailed in the first place? The answer lies in the “geometry of nature.” The mathematician Benoit Mandelbrot, who first documented fat-tailed behavior in 1963, explained that nature’s geometry is fractal, not Euclidean: Clouds are not spheres; mountains are not cones; rivers do not travel in straight lines.

Extreme natural disasters are unique because the geometry of nature is extreme. We can only approximate the jaggedness of coastlines by drawing zigzagging lines. The fractal geometry of seismic faults is central to the magnitude of earthquakes.
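
The coastline point can be made quantitative. In the sketch below, the measured length of a Koch-curve “coastline” (a standard fractal stand-in, not any real coast) keeps growing as the ruler shrinks, instead of converging the way a smooth Euclidean curve would.

```python
# Measured length of a fractal "coastline" versus ruler size.
# For a Koch curve, each refinement divides the ruler by 3 and
# multiplies the number of segments by 4, so length grows as (4/3)**n.

ruler = 1.0
segments = 1
for step in range(6):
    print(f"ruler = {ruler:.4f}  measured length = {ruler * segments:.3f}")
    ruler /= 3.0   # use a ruler one third as long...
    segments *= 4  # ...and find four times as many segments

# A smooth curve's measured length would converge; the fractal's diverges.
```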

On March 11, 2011, Japan experienced the tragic implications of fractal geometry. So did Sri Lanka, Thailand and other Asian countries on Dec. 26, 2004, when a massive quake in the Indian Ocean sent a towering tsunami their way.

If we accept that we are at the mercy of fractal geometry, what are we supposed to do?

For many of us, our day-to-day lives seem fairly steady. Then a massive earthquake hits or a monstrous typhoon comes, and everything changes. All of a sudden, we have no idea what to do, where to go, who can help. This is where resilience comes in.

Resilience has become a cliche, but it does have meaning. When the world falls apart, when nothing seems to make sense, resilience is the art of putting the pieces back together. It is about rebuilding the world, though it will not be the same world as it was before.

Here are some key traits of a resilient response:

  1. Drawing on experience
  2. Questioning that experience
  3. Intuition
  4. Improvisation, or making the most of materials at hand
  5. Listening and speaking
  6. Examining preconceptions
  7. Ignorance + knowledge = wisdom
  8. Recognizing and taking advantage of luck

People who exhibit the above traits are quite different from those who work according to strict protocols, procedures and rules. After a catastrophe, we need an A-team with those qualities to complement traditional emergency crews. Call them resilient responders.

Of course, if one group improvises while the other goes by the book, there could be clashes. Would these two cultures be able to coexist and work in harmony after a calamity? I do not know. What I do know is that both modes of thinking are essential for survival.