People who practice collaborative innovation at times seek out-of-the-box ideas for a given challenge. In this article, innovation architect Doug Collins applies work from Nobel laureate Daniel Kahneman to offer insights on selecting crowds that can achieve novelty.

Working with clients for the full length of a collaborative innovation campaign satisfies me. We get to connect the dots.

Did the campaign team frame the critical question that resonated with the community?

Did the community, in turn, contribute and develop compelling ideas that addressed the question?

In the Box

Campaign teams sometimes express regret at this later stage. Specifically, they observe that the community failed to contribute out-of-the-box ideas. They assess ideas that the community advanced through the later stages of crowdsourcing with “yes, but” commentary. That is, “yes, person X contributed a good idea Y, but we have already thought of Y.” The collaborative innovation campaign serves as an exercise in validating known approaches to a problem, as opposed to identifying novel solutions.

The client and I reflect on the extent to which the campaign team equates compelling with novel.

Running a campaign to vet solutions to a challenge couched in a predictable environment can bring value (e.g., How might we improve this process?). The campaign team increases the number of people who weigh in on the question. Doing so increases the likelihood that the chosen solution will, in fact, work.

What if, however, the campaign team seeks a novel approach for navigating an unpredictable environment (e.g., How might we enter a new market?)? What can they do?

Psychologist and Nobel laureate Daniel Kahneman offers insights into how we make decisions. His work can benefit campaign teams who seek to maximize the probability that the community develops valid, novel solutions. He shares his findings in his book, Thinking, Fast and Slow.

The Anchoring Effect and the Illusion of Validity

Dr. Kahneman asked one set of visitors to the San Francisco Exploratorium two questions…

  • Is the height of the tallest redwood more or less than 1,200 feet?
  • What is your best guess about the height of the tallest redwood?

He asked a second set a variation…

  • Is the height of the tallest redwood more or less than 180 feet?
  • What is your best guess about the height of the tallest redwood?

The high anchor of 1,200 feet exceeded the low anchor of 180 feet by 1,020 feet. People who received the first set of questions estimated the height of the tallest redwood at 844 feet, on average. People who received the second set estimated the height at 282 feet, on average.

The results of this experiment, and many variations on it, show that arbitrary anchors can influence one’s perspective. The initial questions that referenced 1,200 and 180 feet did not claim that those values related to the height of the tallest redwood. However, in the absence of other, valid information, we use these random numbers as points of reference, as anchors, for making our own estimates. Further, Dr. Kahneman observes that we do so effortlessly, without thinking, in most cases.

In a second study, which relates to what Dr. Kahneman calls the illusion of validity, he analyzed the results of 25 people who worked in the financial services industry, picking stocks for clients. The results covered eight years. The firm awarded bonuses based on how well the stocks that each person picked performed over the year. Dr. Kahneman computed a correlation coefficient for every pair of years (i.e., Year 1 relative to Year 2, Year 1 relative to Year 3, and so on), deriving a total of 28 coefficients from the eight years.

Dr. Kahneman found that the average of the 28 correlations was .01. That is, he found no evidence that skill played any role in each person’s ability to pick stocks for their clients’ portfolios. Interestingly, the principals of the firm had no trouble reconciling Dr. Kahneman’s findings with their pay practices.
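Dr. Kahneman’s pairwise comparison is easy to sketch. The snippet below is illustrative only: it uses synthetic data (random year-end rankings for 25 advisers over 8 years, i.e., the null hypothesis that luck alone drives results) rather than the firm’s actual records. The eight years yield C(8, 2) = 28 pairs, hence the 28 coefficients the study reports, and the average correlation lands near zero, mirroring his .01 finding.

```python
import itertools
import random

# Synthetic stand-in for the study: 25 advisers ranked at random each year,
# i.e., the null hypothesis that skill plays no role in the outcomes.
random.seed(0)
n_advisers, n_years = 25, 8
rankings = [random.sample(range(n_advisers), n_advisers) for _ in range(n_years)]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# One coefficient per pair of years: C(8, 2) = 28 pairs in total.
pairs = list(itertools.combinations(range(n_years), 2))
correlations = [pearson(rankings[a], rankings[b]) for a, b in pairs]
mean_r = sum(correlations) / len(correlations)

print(len(pairs))  # 28
print(mean_r)      # near 0: no year-to-year persistence in the synthetic data
```

If skill persisted from year to year, the rankings would correlate and the average would move well above zero; a mean near zero is what pure chance looks like.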

Dr. Kahneman observes that the illusion of skill, particularly when viewed in the context of performing higher-level tasks such as interpreting economic data in an attempt to divine the future, can easily trump the question of validity (i.e., in this case, knowing how to interpret economic data does not relate to the quality of one’s stock picks). Dr. Kahneman notes that, in this case, the one skill required to positively affect performance, the valid indicator, would be the ability to know whether the market had accurately priced the stock in question. The brokers and the firm’s principals, lacking the ability to divine this information, knowingly chose to rely on an invalid substitute for measuring performance. Dr. Kahneman observes that this behavior occurs often. Political commentators offer notoriously inaccurate predictions of elections, for example.

Implications for Crowdsourcing Innovation

Dr. Kahneman describes more scenarios where we, as humans, remain ignorant of the traps that hamper our ability to make good decisions. The scenarios of anchoring and the illusion of validity, in particular, relate most directly to a campaign team’s desire for truly novel ideas.

With respect to anchoring, I often witness more senior members of the community—often the campaign team members—inadvertently anchoring the dialogue around ideas with statements such as, “We have tried idea X before,” or “We have idea X on our roadmap.”

These statements, per Dr. Kahneman’s findings with the professional stock pickers, may enjoy no greater validity than the comments from other community members. However, because the community perceives that these senior members exercise higher-level skills (e.g., building a product roadmap), it assigns greater weight to their comments. Their comments serve as a random anchor that constrains the community members’ thinking. They establish the redwood tree at 180 feet.

The presence of perceived experts making invalid comments may curtail ideas that further exploration, given the opportunity to incubate, would prove both compelling and valid.

Dr. Kahneman observes that expert intuition more often reflects valid perspective when the following conditions exist:

  • The environment is sufficiently regular to be predictable (e.g., chess, where grand masters consistently beat less experienced opponents because they have experienced thousands more outcomes, each realized under the same rules)
  • The person has had an opportunity to learn these regularities through prolonged practice (i.e., to play the game repeatedly and learn the consequences of specific moves and strategies)

By contrast, campaign teams frequently launch collaborative innovation campaigns precisely because they value the larger community’s perspective on highly unpredictable outcomes (e.g., What product should we build for a new market segment?).

Campaign teams may want to consider two courses of action, given Dr. Kahneman’s findings.

  • When the campaign team desires novel solutions, they should seek a random population from a set of people who have no firsthand knowledge of or experience with the topic. Doing so reduces the probability that random anchors masquerading as conventional wisdom skew the dialogue.

For example, do not invite the product managers to participate in a campaign when the topic relates to the future direction for their offering.

  • Rely on a simple set of heuristics when evaluating ideas, as opposed to relying on the intuition of experts whose perspective may prove wholly invalid in highly unpredictable environments.

Everett Rogers, for example, identified the basic factor that predicts the likelihood an idea will diffuse: its relative advantage, the degree to which the innovation materially improves upon the current method.

Practically speaking, the big impediment to applying lessons from Dr. Kahneman’s work is overcoming the reluctance of the organization’s perceived experts, ironically the same group of people who seek out-of-the-box ideas from the campaign. Nobody wants their years of education and experience equated to a roll of the dice.

Green field ideas require green people. It’s not enough to direct a collaborative innovation community to think outside the box. Instead, the campaign team must design the community outside the box from the start.

People who commit to the practice of collaborative innovation for the long term may want to perform a study of their own, in which they pose the same challenge question to two populations: the first with firsthand knowledge of the subject; the second, with none. To what extent does the latter group offer more novel ideas?

The following figure summarizes the discussion.

Figure 1: Mapping crowd demographics to the environment for the inquiry


About the Author:

Doug Collins serves as an innovation architect. He has served in a variety of roles in helping organizations navigate the fuzzy front end of innovation by creating forums, venues, and approaches where the group can convene to explore the critical question. He today works at Spigit, Inc., where he consults with Fortune 1000 clients on realizing their vision for achieving leadership in innovation by applying social media and ideation markets in blended virtual and in-person communities. Previously, Doug formed and led a variety of front-end initiatives, including executive advisory programs for industry influencers, early adopter programs for lead users, corporate strategic planning, and structured explorations of new market and product opportunities. Before joining Spigit, Doug worked at Harris Corporation and at Structural Dynamics Research Corporation, which is now part of Siemens Corporation.