Earlier today I was asked to speak on the business of shared space and the political effects of social media in Northern Ireland at an event in Liverpool in the new year. Not long afterwards I came across this paper which was written in 2007 by Edward Glaeser and Cass Sunstein.
In the wake of the largest and in many ways most successful social media campaign we’ve ever seen in Scotland, I think it explains some of the unmet expectations involved:
The information of the crowd provides new data, which should lead people to be more confident and more extreme in their views. Because group members are listening to one another, it is no puzzle that their post-deliberation opinions are more extreme than their pre-deliberation opinions.
The phenomenon of group polarization, on its own, does not imply that crowds are anything but wise; if individual deliberators tend to believe that the earth is round rather than flat, nothing is amiss if deliberation leads them to be firmer and more confident in that belief.
Curating cognitive diversity within groups may provide a useful counterbalance. Dr Scott E Page of the University of Michigan puts it rather sparely: ‘Crowd error’ = ‘Average error’ – ‘Diversity’. As Nick Cohen recently argued in the Observer, the serious lack of diversity within modern (often, but by no means exclusively, governmental) institutions has driven out difference and dissent, creating what Chris Dillow calls “Bubblethink”.
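Page’s formula is actually an exact arithmetic identity, which a few lines of Python can verify. The numbers below are invented purely for illustration; any set of guesses and any true value will satisfy it:

```python
# Diversity prediction theorem (Scott E. Page):
#   crowd error = average individual error - prediction diversity
# Hypothetical guesses at some unknown quantity, and a hypothetical truth.
predictions = [48.0, 55.0, 60.0, 72.0]
truth = 58.0

crowd = sum(predictions) / len(predictions)  # the crowd's collective guess

crowd_error = (crowd - truth) ** 2
avg_error = sum((p - truth) ** 2 for p in predictions) / len(predictions)
diversity = sum((p - crowd) ** 2 for p in predictions) / len(predictions)

print(crowd_error, avg_error, diversity)
# The identity holds exactly: the crowd beats its average member
# by precisely the amount of diversity in its predictions.
assert abs(crowd_error - (avg_error - diversity)) < 1e-9
```

The point the formula makes is that a crowd is only as wise as it is diverse: squeeze the diversity term toward zero and the crowd’s error climbs back up to the average individual’s error.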
Glaeser and Sunstein again…
… we suggest that social learning is often best characterized by what we call Credulous Bayesianism. Unlike perfect Bayesians, Credulous Bayesians treat offered opinions as unbiased and independent and fail to adjust for the information sources and incentives of the opinions that they hear. There are four problems here.
First, Credulous Bayesians will not adequately correct for the common sources of their neighbors’ opinions, even though common sources ensure that those opinions add little new information.
Second, Credulous Bayesians will not adequately correct for the fact that their correspondents may not be a random sample of the population as a whole, even though a non-random sample may have significant biases.
Third, Credulous Bayesians will not adequately correct for any tendency that individuals might have to skew their statements towards an expected social norm, even though peer pressure might be affecting public statements of view.
Fourth, Credulous Bayesians will not fully compensate for the incentives that will cause some speakers to mislead, even though some speakers will offer biased statements in order to persuade people to engage in action that promotes the speakers’ interests.
Our chief goal in Sections V-VIII is to show the nature and effects of these mistakes, which can make groups error-prone and anything but wise, especially if they lack sufficient diversity.
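The first of those four mistakes can be made concrete with a toy Gaussian model (this is entirely my own illustration, not the paper’s formal setup, and the parameter values are invented): if n neighbours all repeat essentially one shared signal, a Credulous Bayesian who treats their reports as independent ends up far more confident than the evidence warrants.

```python
# Credulous vs. correct Bayesian updating when neighbours' opinions
# all trace back to a single common source.
# Toy Gaussian model with illustrative parameters.

prior_var = 1.0    # variance of the prior belief about the unknown quantity
signal_var = 1.0   # variance of the one underlying signal
n = 10             # number of neighbours repeating that same signal

# Credulous Bayesian: treats the n repeated opinions as n independent
# signals, so posterior precision grows linearly in n.
credulous_var = 1.0 / (1.0 / prior_var + n / signal_var)

# Correct Bayesian: recognises only one genuinely new piece of
# information, however many voices repeat it.
correct_var = 1.0 / (1.0 / prior_var + 1.0 / signal_var)

print(credulous_var, correct_var)
# The credulous posterior variance keeps shrinking as n grows,
# even though no new information arrives after the first report.
```

With ten neighbours the credulous posterior variance is roughly a fifth of the correct one, which is the mechanism behind polarisation in the paper: common sources make opinions feel like mounting evidence when they are really an echo.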
Or as Carol Craig noted during #IndyRef…
…there are times when it makes sense to be optimistic (or use optimism building techniques if you are prone to pessimism) and times when it is better to be pessimistic. He writes: ‘The fundamental guideline for not deploying optimism is to ask what the cost of failure is in the particular situation.
If the cost of failure is high, optimism is the wrong strategy’.
This may be one reason why ‘smart’ hierarchies still retain a capacity to beat the excessively optimistic, risk-taking crowd.
Mick is founding editor of Slugger. He has written papers on the impact of the Internet on politics and the wider media, and is a regular guest and speaker at events across Ireland, the UK and Europe. Twitter: @MickFealty