Chapter 6: Collective Intelligence in Action
The author mentions that the British government once offered commissions for the solution of problems: instead of setting up a government agency, or funding academics to do research that might never produce a solution, the government would offer a bounty to anyone who could solve the problem.
For example, in the eighteenth century the British government offered a "longitude prize" of 20,000 pounds (roughly $12 million today) to anyone who could provide a way to accurately determine the longitude of a sailing vessel. This was, essentially, crowdsourcing: the government broadcast the problem to the largest possible number of people and hoped that someone could provide an answer its own experts couldn't.
(EN: This is rather interesting, as the current practice seems to be doling out grants and contracts to those who do research, sometimes for decades, without providing an actual solution to the problem.)
Outsourcing Research and Development
The old paradigm of R&D was a clandestine department where a few "big brains" would attempt to invent or improve products. This reflects a mindset that value is created by the few and presented for consumption by the many, which worked well in a market where demand far exceeded supply. In a competitive environment where suppliers compete for customers, the customer is in an advantaged position: he will buy what he wants rather than making do with what a limited number of suppliers offer. Hence, the focus of current product development is on discovering what customers want, which is best determined by involving them in the development process.
For some companies, making that transition is painful. For decades, companies were set up to control information and protect their "trade secrets" from being divulged to outsiders, in hopes of gaining a first-mover advantage and maintaining their lead by keeping competitors in the dark. But the world has changed: no secret can be kept for long - and in order to move quickly, you must have transparency. Competitive advantage requires being in touch with the multitudes, not aloof from them.
Even when it comes to scientific and technical problems, the kind that seem to require deep and specialized knowledge, outsourcing is beneficial. There are many individuals who have specialized knowledge and can contribute meaningful solutions to highly technical puzzles.
The financial crisis has been another driver of crowdsourced R&D. When sales slump and budgets tighten, the R&D departments are among the first to be cut back. Companies that are struggling to stay in the black right now often see investing in the future as frivolous. So companies that were traditionally very closed-off have become more open to getting free information from outsiders.
A few rather elaborate case-studies follow: a highly qualified professional who was dissatisfied by the restrictions of being a "corporate scientist," others who find their daily duties to be rather dull and are attracted to opportunities to solve interesting problems, some who are unable to find employment in their chosen field and see contests/bounties as a way to get attention and demonstrate their abilities, etc. The point is that "the crowd" includes many individuals with solid credentials.
Interesting analogy: distributed computing uses the excess capacity of thousands of computers to perform a task. Crowdsourcing likewise taps the excess capacity of thousands of brains.
He also notes proposals from both sides of the political fence, from politicians who propose offering prizes or bounties for solutions rather than giving grants for research that does not lead to a solution.
The predictions of experts can often be very wrong, and the predictions of polls are seldom much better. He suggests that polls are generally not very good instruments because the people who answer them know nothing real is at stake for them personally. Polls can also be abused - for example, exit polls are believed to influence voters who have not yet cast their ballots (those who have not voted yet are more likely to vote for a candidate who seems to be in the lead, particularly if they have no strong preference in a given race), so their use is often questioned.
Online polls are not much better. More people are involved, which in theory lowers the probability of error - but even so, people often speculate and posture when answering polls about what they might do in the future. Market researchers are well aware that people will claim they would gladly buy a product at a given price, but their real behavior in the market is very different from what they tell researchers (nobody wants to seem cheap or poor when speaking to another person, so people routinely exaggerate the amount they would pay).
He speaks of a model in which participants were invited to play a game, a kind of political stock market: players started with a budget and invested in politicians, with prices tied to their standing in the polls. A player could profit by paying 45 cents per share for a politician whom the polls suggested would get 45% of the vote, then selling when the polls indicated 55% (or holding the shares until the outcome of the election was known). The author suggests that this method had a very low margin of error (0.1%, as opposed to 2.5% for exit polls).
His explanation is that the game scenario encouraged more astute behavior than polling because users felt there was something at stake (their ability to win a game). It also allowed the voting to be weighted - the more strongly a player believed in a politician, the more shares they could buy.
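The buy-and-settle mechanics described above can be sketched in a few lines. This is only an illustration of the pricing logic; the function names, prices, and payout rule are assumptions, not the actual rules of the market the author describes.

```python
# Hypothetical sketch of the vote-share market described above.
# All prices and the settlement rule are invented for illustration:
# assume each share pays out the candidate's final vote share, in cents.

def buy(budget_cents, price_cents, shares):
    """Spend from the play-money budget to buy shares at the quoted price."""
    cost = price_cents * shares
    assert cost <= budget_cents, "not enough budget"
    return budget_cents - cost

def settle(shares, final_vote_share):
    """At election time, each share pays the candidate's actual vote
    percentage, in cents."""
    return shares * final_vote_share

budget = 10_000                   # starting play money, in cents
budget = buy(budget, 45, 100)     # buy 100 shares at 45 cents each
payout = settle(100, 55)          # candidate actually receives 55%
profit = payout - 45 * 100        # 100 shares x 10 cents = 1000 cents
```

The weighting the author mentions falls out naturally: a player who is more confident simply buys more shares, so strong beliefs move the price more than weak ones.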
There's a mention of a firm that had serious problems with its central planners' predictions of the number of units that would sell - getting this right was important to optimize production so that they would produce enough inventory to meet demand but not so much as to have unsold inventory. The predictions were greatly improved when the company asked its sales force, who were more numerous and more in touch with the customers.
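The improvement from polling the sales force is the classic averaging effect: many independent, noisy estimates tend to land closer to the truth than one planner's guess. A minimal sketch, with all figures invented:

```python
# Hypothetical illustration of forecast aggregation.
# Every number below is invented; the point is only that the
# average of many independent estimates cancels individual errors.

planner_forecast = 80_000            # one central planner's estimate
sales_forecasts = [95_000, 102_000, 88_000, 110_000,
                   97_000, 105_000, 92_000, 99_000]  # field estimates

crowd_forecast = sum(sales_forecasts) / len(sales_forecasts)

actual_units = 100_000
planner_error = abs(actual_units - planner_forecast)
crowd_error = abs(actual_units - crowd_forecast)
```

The individual salespeople here are each off by thousands of units, but their errors point in different directions, so the aggregate is far closer than the single planner's figure.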
While prediction markets can be more accurate than experts and polling, they are far from perfect. They have shown a tendency to suffer from fads, information cascades, and runs. Consider that stock markets are, in fact, prediction markets, with participants risking their own fortunes on guessing which companies will prosper - and there is very little correlation between a stock's price and the company's actual performance. The same inaccuracy holds for derivatives (stock options, which bet on a future share price) and foreign exchange (bets on the future value of currencies).
The author mentions that there has been some difficulty in making more widespread use of prediction markets. The stock market aside, most state governments regard risking money on the probability of future events to be a form of gambling. There is also some media uproar over the prediction of unfortunate events - an experiment undertaken to predict terrorist attacks was quickly shut down when the media reported that people were making money off of tragedy.
Case Study: Marketocracy
There is a case-study of a company that sponsored an online site for fantasy stock trading, giving individuals a "fake" million dollars to invest in order to see how they would perform. About 50,000 people signed on as paper traders. The operators then began to harvest and analyze the data from these fake trades to identify potential real investments.
Their system does not include all traders in its analysis - it focuses on identifying the top individuals, the "Marketocracy Masters 100," whose trading activity outperformed the S&P by an average of about 40%. The group has often identified "diamonds in the rough" - companies ignored by most professional securities analysts that nonetheless performed very well. The analysis also led to the creation of a fund that follows the investments of the top performers, and it returned an average of 56% in its first five years.
They also found that constantly monitoring the performance of the players in the game enabled them to adjust their portfolio strategy to market conditions. When the market began to fall, the aggressive investors' performance began to lag behind that of conservative investors. Merely following the moves of the top 100 at any given time produced a shifting portfolio that adjusted to conditions. "The reason diversity trumps ability ... is because you can always throw the idiots off the bus."
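The "throw the idiots off the bus" mechanism - periodically re-ranking traders and following only the current top performers - can be sketched as below. The trader names, returns, and ranking rule are invented for illustration; this is not Marketocracy's actual algorithm.

```python
# Hypothetical sketch of the re-ranking step: follow only the
# current top performers, so the tracked portfolio shifts as
# market conditions change. All traders and returns are invented.

def top_performers(traders, n):
    """Rank traders by trailing return and keep the best n names."""
    ranked = sorted(traders.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:n]]

# trailing returns in a rising market
bull = {"aggressive_a": 0.30, "aggressive_b": 0.25,
        "conservative_a": 0.08, "conservative_b": 0.06}

# the same traders after the market turns down
bear = {"aggressive_a": -0.20, "aggressive_b": -0.15,
        "conservative_a": 0.02, "conservative_b": 0.01}

print(top_performers(bull, 2))   # aggressive traders lead the ranking
print(top_performers(bear, 2))   # conservatives replace them
```

Re-running the ranking on fresh data is what makes the followed portfolio self-adjusting: no one decides to rotate out of aggressive strategies; the laggards simply drop out of the top 100.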
They do concede that the fund did poorly during choppy markets - when there is a great deal of variability and the losers and winners are not consistently in the same sectors. When this occurred in 2004, the fund lost fully half its investment capital. There is some indication that they are working on refining the algorithm, and perhaps extending the number of traders they track closely, to even out the variances.