Management guru Peter Drucker noted that “making good decisions is a critical skill at all levels.” In mid-April 1912, a decision was made based on an observation that proved to be fatally flawed. For ships sailing the North Atlantic routes at night, in conditions where icebergs could be expected, it was common practice to detect the presence of an iceberg from the white foam splashing against the base of its dark bulk, so that a decision could be made to steer the ship to port or starboard and avoid a collision. Under the prevailing conditions of an ocean smooth as glass, the lookouts on the Titanic saw no such indicator, and the rest is history. A more mundane example of an indicator is looking for dark clouds in the sky to judge whether it is going to rain and whether to take an umbrella when we venture out. As we know from experience, it may not rain; this indicator is unreliable, or “fallible.”
We constantly, and unconsciously, make decisions based on multiple fallible indicators. Ideally, indicators should be clearly defined, reproducible, understandable, and unambiguous. As we have just seen, these features are not always possible.
Indicators are the critical step between identifying a problem (discussed in Part 1 of this blog) and determining a possible practical solution in the appropriate problem context. Not recognizing this frequently leads to the ‘jump to’ solutions also described in Part 1 of this blog.
As Kenneth R. Hammond points out in his fascinating 2007 book Beyond Rationality: The Search for Wisdom in a Troubled Time:
“Because indicators vary considerably in their fallibility, from complete fallibility to perfect infallibility, whether the fallibility is due to random or deliberate factors, it is essential that we be able to measure their degree of fallibility so that we can discriminate among them. These measures simply indicate, in one form or another, how often an observable indicator is associated with an observable fact, condition, or event.”
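Hammond’s point that we should “measure their degree of fallibility” can be made concrete with a small sketch. The code below is purely illustrative, using made-up data for the dark-clouds-and-rain example above: it measures an indicator’s reliability as the relative frequency with which the fact actually occurred on occasions when the indicator was observed.

```python
def hit_rate(indicator, outcome):
    """Fraction of occasions on which the fact occurred, out of all
    occasions on which the indicator was observed. A value near 1.0
    means a highly reliable indicator; near 0.0, a highly fallible one."""
    observed = [fact for signal, fact in zip(indicator, outcome) if signal]
    return sum(observed) / len(observed) if observed else 0.0

# Hypothetical daily records: 1 = dark clouds seen / rain fell, 0 = not.
dark_clouds = [1, 1, 0, 1, 0, 1, 1, 0]
rained      = [1, 0, 0, 1, 0, 1, 0, 0]

# Dark clouds were seen on 5 days; it rained on 3 of them.
print(f"Reliability of 'dark clouds' as a rain indicator: "
      f"{hit_rate(dark_clouds, rained):.2f}")  # prints 0.60
```

Real indicator validation would of course use far more data and also account for misses (rain with no clouds) and false alarms, but the underlying measure is just this kind of observed association frequency.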
In selecting a possible policy solution to a technology commercialization problem we may use multiple indicators to select one or more possible solutions. These might be (1) off-the-shelf solutions, modified according to the problem’s context, (2) repackaged existing solutions, or (3) new solutions formed from theory or practice.
In many technology commercialization applications we also wish to know how well solutions will scale up for widespread application. There is a paradox in how we approach the scaling of innovation. In theory, we test an innovation in order to determine whether it works and has potential for scaling up. In practice, however, the decision to move toward scaling up must often be made on the basis of inadequate information, producing fallible indicators, or indicators of unknown reliability, and before all contextual conditions are in place (context was discussed in Part 1 of this blog).
Next month in the final part of this blog we will finally reach the promised investigation – after this necessary detour – of how problem solving in less structured Rainforest ecosystems may differ from problem solving in more structured environments. What Rainforest elements impact on problem solving? Is identifying problems and possible solutions easier or harder in the Rainforest?
In the meantime, let’s wax philosophical and state some hypotheses to chew over and test next time. “Wait,” I hear you say, “what’s philosophy got to do with technology commercialization?” I hope to demonstrate that the answer is “a lot.” The hypotheses, H1 to H5, are:
H1. The fallibility of multiple fallible indicators is reduced in spaces characterized by a balance of strong and weak links, with a sufficient number of weak links for stability while still enabling access to divergent opinions and experiences (we will discuss weak and strong links in a future Blog).
H2. The fallibility of multiple fallible indicators is reduced in spaces characterized by low transaction costs and high trust levels.
H3. The fallibility of multiple fallible indicators is reduced in spaces characterized by efficient boundary-spanning organizations.
Note that what connects H2 and H3 is not just reduced transaction costs but transaction value (see The Rainforest book for a discussion).
H4. The fallibility of multiple fallible indicators is reduced in ordered domains focused on efficiency (such as Plantations), where the whole is the sum of its parts and optimizing the parts optimizes the whole.
H5. The fallibility of multiple fallible indicators is increased in complex spaces (such as Rainforests), where small actions may change the nature of the system. As a result, to optimize the whole system, sub-optimal behavior of each of its components must be allowed. See my February Blog, Imperfect Works.
Innovation thrives when successful entrepreneurs stay involved and give back.
We’re honored to have Steve Case join us for this Q&A. Steve embodies the generous “pay it forward” philosophy at the foundation of healthy innovation ecosystems, which we call Rainforests.
Despite a highly successful business career—as co-founder and CEO/Chairman of America Online, and later Chairman of AOL-Time Warner—Steve has not rested on his laurels. In recent years, he has taken on a visible role as a supporter and mentor of new startups through his investment firm Revolution, LLC, where he serves as Chairman and CEO, and the nonprofit Startup America Partnership, where he serves as Chairman.
You can read the rest of the article here: http://www.forbes.com/sites/victorhwang/2013/04/03/qa-with-steve-case-entrepreneurs-deserve-our-praise-and-support/