Just a quick share dump, with a few links on the new age of digital politics. They relate to the hype about what the internet can do and what it cannot. First, that story about Facebook likes getting used (on an industrial scale) for cleverly segmented marketing:
Cambridge Analytica has marketed itself as classifying voters using five personality traits known as OCEAN — Openness, Conscientiousness, Extroversion, Agreeableness, and Neuroticism — the same model used by University of Cambridge researchers for in-house, non-commercial research.
The question of whether OCEAN made a difference in the presidential election remains unanswered.
Some have argued that big data analytics is a magic bullet for drilling into the psychology of individual voters; others are more skeptical. The predictive power of Facebook likes is not in dispute. A 2013 study by three of Kogan’s former colleagues at the University of Cambridge showed that likes alone could predict race with 95 percent accuracy and political party with 85 percent accuracy.
Less clear is their power as a tool for targeted persuasion; Cambridge Analytica has claimed that OCEAN scores can be used to drive voter and consumer behavior through “microtargeting,” meaning narrowly tailored messages.
Nix has said that neurotic voters tend to be moved by “rational and fear-based” arguments, while introverted, agreeable voters are more susceptible to “tradition and habits and family and community.”
Dan Gillmor, director of the Knight Center at Arizona State University, said he was skeptical of the idea that the Trump campaign got a decisive edge from data analytics. But, he added, such techniques will likely become more effective in the future.
“It’s reasonable to believe that sooner or later, we’re going to see widespread manipulation of people’s decision-making, including in elections, in ways that are more widespread and granular, but even less detectable than today,” he wrote in an email.
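To make the mechanics concrete: the core move behind the 2013 likes study is treating each user as a row of binary "like" indicators and fitting a simple linear classifier over them. The sketch below is a toy reconstruction of that idea on synthetic data — not the study's actual code, features, or dataset; everything here is invented for illustration.

```python
# Toy reconstruction (synthetic data, hypothetical throughout) of the idea
# behind predicting traits from likes: each user is a row of 0/1 "like"
# indicators, and a plain logistic regression separates the two classes.
import math
import random

def train_logistic(X, y, lr=0.1, epochs=200):
    """Fit weights w so that sigmoid(w . x) approximates P(trait=1 | likes)."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for x, target in zip(X, y):
            z = max(-30.0, min(30.0, sum(wi * xi for wi, xi in zip(w, x))))
            p = 1.0 / (1.0 + math.exp(-z))           # predicted probability
            g = p - target                           # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# Synthetic users: "like" #1 is common for trait=1, "like" #2 for trait=0.
random.seed(0)
X, y = [], []
for _ in range(200):
    t = random.random() < 0.5
    X.append([
        1.0,                                                    # bias term
        1.0 if random.random() < (0.9 if t else 0.1) else 0.0,  # like #1
        1.0 if random.random() < (0.1 if t else 0.9) else 0.0,  # like #2
    ])
    y.append(1 if t else 0)

w = train_logistic(X, y)
accuracy = sum(predict(w, x) == t for x, t in zip(X, y)) / len(X)
```

Real studies work with tens of thousands of like columns and a dimensionality-reduction step before the classifier; the point of the sketch is only that sparse binary signals like this can carry surprising predictive power.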
It’s not just in politics that these new digital monopolies are getting used to manage behaviours:
To keep drivers on the road, the company has exploited some people’s tendency to set earnings goals — alerting them that they are ever so close to hitting a precious target when they try to log off.
It has even concocted an algorithm similar to a Netflix feature that automatically loads the next program, which many experts believe encourages binge-watching. In Uber’s case, this means sending drivers their next fare opportunity before their current ride is even over.
And most of this happens without giving off a whiff of coercion.
“We show drivers areas of high demand or incentivize them to drive more,” said Michael Amodeo, an Uber spokesman. “But any driver can stop work literally at the tap of a button — the decision whether or not to drive is 100 percent theirs.”
Uber’s recent emphasis on drivers is no accident. As problems have mounted at the company, from an allegation of sexual harassment in its offices to revelations that it created a tool to deliberately evade regulatory scrutiny, Uber has made softening its posture toward drivers a litmus test of its ability to become a better corporate citizen.
The tension was particularly evident after its chief executive, Travis Kalanick, engaged in a heated argument with a driver that was captured in a viral video obtained by Bloomberg and that prompted an abject apology.
But an examination by The New York Times found that Uber is continuing apace in its struggle to wield the upper hand with drivers.
And as so-called platform-mediated work like driving for Uber increasingly becomes the way people make a living, the company’s example illustrates that pulling psychological levers may eventually become the reigning approach to managing the American worker.
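The "forward dispatch" mechanic the Times describes — queuing a driver's next fare before the current ride ends, Netflix-autoplay style — amounts to a small scheduling rule. A hypothetical sketch (class names, thresholds, and logic are all invented; this is not Uber's implementation):

```python
# Hypothetical sketch of a "forward dispatch" rule: queue the next fare
# before the current ride ends, so the driver never sees an empty screen.
# Names, thresholds, and logic are invented for illustration only.
from collections import deque

class DriverQueue:
    def __init__(self):
        self.rides = deque()

    def assign(self, ride):
        self.rides.append(ride)

    def should_forward_dispatch(self, seconds_left, threshold=120):
        # Offer a new fare only when the current ride is near its end
        # and nothing is queued behind it yet.
        return len(self.rides) <= 1 and seconds_left <= threshold

    def complete_current(self):
        return self.rides.popleft() if self.rides else None

dq = DriverQueue()
dq.assign("ride-1")
if dq.should_forward_dispatch(seconds_left=90):  # ride-1 is almost done
    dq.assign("ride-2")                          # queued before completion
finished = dq.complete_current()                 # ride-1 ends; ride-2 waits
```

The behavioural point is in the design, not the code: like autoplay, the rule removes the natural stopping point between tasks, so opting out requires an active decision rather than a passive one.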
But hold the paranoia. The flip side is the hysteria that comes when algorithms displace human editors/managers/gatekeepers. On fake news, Tablet magazine shows that some liberals who protest most loudly about Trump’s transgressions are retaliating in kind:
Since the phenomenon captured public imagination in the wake of Trump’s victory, the term “fake news” has evolved from describing the product of websites deliberately pushing false stories, hoaxes and conspiracy theories to now include pretty much any claim of dubious nature.
Robbed of its original and specific meaning, “fake news” is now used, often sarcastically, to describe any piece of information that someone doesn’t like. For instance, perhaps the greatest-ever beneficiary of fake news—the 45th President of the United States—now regularly calls CNN “fake news.” So too did former United Nations Secretary General Ban Ki Moon label as “fake news” press reports damaging his hopes to become president of South Korea.
Knowingly telling a falsehood used to be called “lying.” Depending on your point of view, that’s most likely what Trita Parsi, the Iranian regime’s most suave dissembler in the West, did when he tweeted, in regard to the Trump administration’s executive order establishing restrictions on travel into the United States by citizens from seven Muslim-majority countries (which happen to be the same seven countries singled out as potential terror threats by President Barack Obama’s Department of Homeland Security), that green-card holders were being “asked their views on Trump” by customs officials at airports over the weekend.
Like many Trump administration critics, Parsi falsely claimed that the executive order amounts to a “Muslim ban,” when the most populous Muslim countries, like Indonesia and Pakistan, are not affected by it at all. Parsi repeated his claim regarding political litmus interrogations on MSNBC, and it was picked up by The Guardian, neither of which bothered to confirm his claims independently.
As journalists get paid less and there are fewer of them, so too does the quality of the product go down, and journalism enters a rabbit hole era where it’s constantly chasing its own relativistic tail.
Joe Brewer (@cognitivepolicy), writing on Facebook this evening, noted this:
…the most essential overlooked problem on Earth today is the INABILITY TO DISCERN what is going on in the midst of painful upheavals and social turmoil. The world is too complex and the misinformation campaigns too sophisticated for most people to understand what is really going on.
These are all byproducts of what some glibly call the democratisation of public voice. Some of Onora O’Neill’s insights from her 2002 Reith Lectures were remarkably acute/prescient:
…high enthusiasm for ever more complete openness and transparency has done little to build or restore public trust. On the contrary, trust seemingly has receded as transparency has advanced. Perhaps on reflection we should not be wholly surprised.
It is quite clear that the very technologies that spread information so easily and efficiently are every bit as good at spreading misinformation and disinformation. Some sorts of openness and transparency may be bad for trust.
She goes on to point out that honest mistakes are far less morally harmful to society than deception:
…deception is the real enemy of trust. Deception is not just a matter of getting things wrong. It can be pretty irritating to be misled by somebody’s honest mistake, but it is not nearly as bad as being their dupe.
Deceivers mislead intentionally, and it is because their falsehood is deliberate, and because it implies a deliberate intention to undermine, damage or distort others’ plans and their capacities to act, that it damages trust and future relationships.
Deception is not a minor or a marginal moral failure. Deceivers do not treat others as moral equals; they exempt themselves from obligations that they rely on others to live up to.
Deception lies at the heart of many serious crimes, including fraud and embezzlement, impersonation and obtaining goods by false pretences, forgery and counterfeiting, perjury and spying, smuggling and false accounting, slander and libel. [Emphasis added]