


Some “Anchor” Texts for Unit 3

Table of Contents
Reader, “How We Got To Post-Truth”
Lies, propaganda & fake news: A challenge for our age
Soll, “The Long and Brutal History of Fake News”
How to Stop the Spread of Fake News: NYTimes Room for Debate

Reader, “How We Got To Post-Truth”
fastcompany.com/3065580/how-we-got-to-post-truth

In the groggy aftermath of the 2016 election, slightly more than half of Americans are still in shock. In an effort to understand not only how Trump happened but how they had so underestimated his appeal, editorialists have loudly pointed fingers at Facebook, which, according to a new Pew report, is now used daily by 68% of the American population. Facebook, they write, is uniquely responsible both for a seemingly unmitigated spread of fake news and for serving up news stories that fit users’ own beliefs, trapping us all in our own filter bubbles.

Of course, the increasing polarization in America—the shrinking overlaps between conservatives and liberals and between rural and urban America—is larger than the Facebook news feed. But the criticism of the social network highlights a broader problem: not the problem of not enough information, but rather an overabundance of it.

Online, we must parse a deluge of data, some of it completely fake, some of it somewhere between fact and falsity. On November 11, for instance, a website called The Conservative Daily Post tweeted a story headlined: “Maryland REFUSING Electoral College, Hillary Given Presidency As More States Follow CLICK HERE.” The story was a lie, built on a 2007 Maryland law pledging the state’s electoral votes to the winner of the national popular vote. That decision came nearly a decade ago, but there it was, being shared on social media as if it were Maryland’s response to the recent election of President Trump. In any case, Maryland’s electoral college votes had already gone to Clinton, a fact pointed out by numerous commenters. (An equal number of commenters seemed to take the news at face value.)

The Conservative Daily Post later removed the tweet, but the retweets, Facebook posts, and the story itself remain online.

“There are lots of folks who are not prepared for this huge onslaught of information and figuring out how to deal with that,” says Craig Silverman, who reports on fake news for Buzzfeed and revealed that teenagers in the Balkans were running fake news sites in support of Donald Trump to make money.

Even President Obama has weighed in on the issue, citing a concern for “democratic freedoms” during a press conference on Thursday with German chancellor Angela Merkel. “If we are not serious about the facts and what’s true and what’s not,” he said, “particularly in the social media era when so many get information from sound bites and snippets off their phone, if we can’t discriminate between serious arguments and propaganda, then we have problems.”

When It’s All “Content”

In the last fifteen years, blogging has changed the way news is written, reported, and distributed. No longer do writers and pundits require a printing press or much financial wherewithal—they simply need a computer and an internet connection. While this diversification has brought about a smorgasbord of new points of view and methods of reporting, it has also opened the floodgates to a spectrum of unvetted information providers. As journalism has expanded online, pontificators and gossips have proliferated. And as eyeballs have moved online too, local news organizations—especially those in rural, small, and medium-sized localities—have suffered precipitous drops in advertising, meaning layoffs, severely cut resources, and outright shutdowns.

At the same time, point-of-view journalism, wherein reporters intimate or disclose their subjective opinions or their stance on particular issues, began taking hold. The rejection of the view-from-nowhere style of reporting—an attempt to appear unbiased in factual presentations—may have helped foster an environment in which readers mistrust reporters who have a personal view on a given topic. Meanwhile, online and elsewhere, news blends together with entertainment—a response, perhaps, to failing revenue and a constant need for pageviews. On a variety of websites geared toward younger, largely progressive audiences, content—a word that now slyly encompasses reported stories, advertising plugs, opinion, and just about anything that resides on the web—shares much of the same tone. Blogs have blossomed into full-blown media companies and brought with them a sarcastic, edgy voice that can undermine their credibility. Fake or poorly reported news has adopted a tone of authority. The result is a jostling media landscape where conspiracy theorists become indistinguishable from credible sources.

“The whole flow of the information is a lot more conversational and a lot more decentralized,” says Rick Edmonds, media analyst at the Poynter Institute.

Even if internet-based reporting has left its mark on other forms of media, it’s not the only game in town: TV still rules. “When you take a step back, you have to recall that television is such a huge staple in the typical American media diet,” says Jesse Holcomb at Pew Research. While 65% of those polled earlier this year turned to the web for their news on the election, 78% of people watched television to get the latest on Trump and Clinton. Overall, 57% of people frequently flick on the tube to find out what’s going on in the world. That’s compared with the 38% of Americans who regularly get news from social media.

But social media’s role in the news cycle is undeniably outsized, and as a source of news, it’s growing quickly: according to the recent Pew report, daily Facebook use among U.S. adults rose from 70 to 76 percent in the past year. And, says Pew, the influence of those posts is growing too: 20 percent of survey respondents said social media had altered their position on a political issue, and 17 percent said it had changed their view of a specific candidate, with Democrats more likely than Republicans to report a change in views.

News Feeds Nudged Out Newspapers

Of course, the news a person ends up receiving depends greatly on where they go looking for it. Newspapers used to be the gatekeepers of information, separating out the nonsense from the stuff of import. But they no longer occupy the grand role they once did. Meanwhile, readers have turned to the places where they increasingly get the rest of their information, often for free: from the web and their phones. Online platforms, seeking ever more engagement and higher ad revenues, are more than happy to oblige. Earlier this year, Facebook launched Instant Articles in order to directly host publisher content and better serve it to users, and Google’s AMP service now offers publishers a similar feature in its search results.

“We’re seeing a whole ratcheting up of how much consumption of news has moved to Google, Facebook, etc.,” says Edmonds. On the whole, 14% of people found social media to be the most helpful source for catching up on the election, according to Pew. Roughly the same percentage thought news websites were the most helpful resources for understanding the election cycle. There is also a lot of overlap in where people source their news: Pew reports that nearly half of all Americans have more than five sources they turn to for election coverage.


Echo Chambers, Built By The Invisible Hand

The rise of social media platforms as curators and distributors of news content has raised new questions about what civic responsibility they have and whether these gatekeepers should be filtering out factually inaccurate articles or reducing the “echo chamber” effect.

The answers don’t seem immediately apparent. In the Washington Post, fake news blogger Paul Horner admits to not only making up stories that spread virulently online, but to creating fake source material for other stories. Much of it, he said, was quickly picked up by supporters of the president-elect. “I think Trump is in the White House because of me. His followers don’t fact-check anything—they’ll post everything, believe anything. His campaign manager posted my story about a protester getting paid $3,500 as fact. Like, I made that up. I posted a fake ad on Craigslist,” he told the Post.

In a quick rebuttal published just after Trump clinched the presidency, Max Read called out Facebook for promoting fake news and aiding Trump’s rise to power. Many others piled on. In response, Facebook CEO Mark Zuckerberg called the notion that fake news on the platform had in any way swayed the election a “crazy idea.” Zuckerberg doubled down in a post, claiming that less than 1% of content on Facebook is fake news or deceptive content. (Of course, notwithstanding Facebook’s recent efforts to be more transparent with advertisers, much of its other data cannot be verified by outside auditors.)

But as others have pointed out, the number of misleading stories on Facebook is less important than how far that content spreads, and how many people are affected by it. Facebook’s own research has demonstrated how effective it is at, for instance, encouraging people to vote or influencing users’ emotions. A Buzzfeed report this week details how the 20 top fake news stories on Facebook received more engagement than the top 20 mainstream news stories—part of a trend that seems to have begun after Facebook tweaked its algorithms to prioritize posts from friends. On Twitter, meanwhile, researchers at USC who examined a sampling of election tweets found that 1 in 5 were sent by a bot.

Furthermore, social networks make it easy to insulate ourselves from people and information that don’t align with what we believe in. And even when we choose not to block out conflicting opinions and people from our feeds, algorithms can do it for us, turning our social feeds into a monotone hum of if-you-like-this-you’ll-like-that news and commentary.

“What we have that is new,” says Silverman, “is massive global platforms connected to the internet where stuff can move with great velocity and get to a lot of people very quickly, and then on top of that, when you throw in the element of algorithms on Facebook mediating what people are seeing and feeding them more of what they’ve already been consuming, you can get this self-reinforcing bubble that people live in.”
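The “self-reinforcing bubble” Silverman describes can be pictured as a feedback loop. The toy ranker below is purely illustrative—it is not Facebook’s actual algorithm, and the topics, stories, and starting weights are invented—but it shows the mechanism: each click on a topic makes that topic rank higher next time, so the feed quickly narrows to a single interest.

```python
# Toy illustration of an engagement-driven feed (not any real platform's
# algorithm): stories are ranked by how often the user engaged with their
# topic before, and each click feeds back into the ranking.
from collections import Counter

def rank_feed(stories, engagement):
    # Sort candidate stories by the user's past engagement with their topic.
    return sorted(stories, key=lambda s: engagement[s["topic"]], reverse=True)

engagement = Counter({"politics": 1, "sports": 1, "science": 1})
stories = [{"title": f"{t} story", "topic": t} for t in ("politics", "sports", "science")]

# Simulate ten sessions in which the user always clicks the top story.
for _ in range(10):
    top = rank_feed(stories, engagement)[0]
    engagement[top["topic"]] += 1  # the click makes this topic rank higher next time

print(engagement.most_common(1))  # one topic now dominates the feed
```

Whichever topic wins the first tie-break absorbs every subsequent click, while the other topics never surface again—a crude version of the bubble effect described above.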

Offline Filters

The bubbles are not limited to the web. Committed liberals tend to prefer city living and suburban environments, while staunch conservatives like living in more spacious rural areas and small towns, according to Pew. Of course, there are people who fall in between, but even they skew along these lines, with more casual liberals largely choosing suburbs over small towns and vice versa. Those who don’t identify consistently with a party are fairly evenly distributed across these four landscapes. (While nearly 40% of Americans identify as unaffiliated independent voters, Pew reports that only 13% of citizens truly don’t lean toward a particular party.)

Whether because people are moving to destinations with similar attitudes, or because of growing access to a spectrum of content and the increased polarization of viewpoints in the news, people of different ideologies have less respect for one another than ever. “For the first time in surveys dating back more than two decades,” a July Pew report found, “majorities of Republicans (58%) and Democrats (55%) say they have a very unfavorable view of the opposing party. In 1994, fewer than half as many Republicans (21%) and Democrats (17%) expressed highly negative views of the other party.”

People with certain political leanings are learning to consult one set of sources and distrust others. While those who consistently identify as liberal tend to trust popular media, those who consider themselves conservative tend to trust very few news sources, according to a 2014 Pew analysis. The alignments are fairly unsurprising: conservatives trust Fox News, Breitbart, and Rush Limbaugh. Liberals trust a spectrum of news sources including the New York Times, Washington Post, NPR, BBC, and CNN. News on social media sites is the least trusted source. “Only 4% of web-using adults have a lot of trust in the information they find on social media. And that rises to only 7% among those who get news on these sites,” reports Pew. In his campaign, Trump leveraged uncertainty about the mainstream news, constantly working his “lying media” mantra into stump speeches.

“I don’t think hammering the media was the cornerstone of Trump’s whole campaign and appeal, but it fit right in,” says Edmonds. “[He] encouraged his backers to think … the media is part of the establishment that they want a break with.” As a result, even legitimate media critiques of Trump may have only helped to steel the beliefs of Trump supporters and feed the narrative that the media was out of touch.

For those who would ordinarily fall in the middle, there is more information than ever from which to draw an opinion. The internet has expanded geographic clusters, connecting people with similar ideas across the globe. It also grants more access to opposing news and views that can be scary. One of the most resounding refrains from Trump supporters (elegantly highlighted in a recent episode of This American Life) is that change (whether the move to clean energy or the immigration of refugees to the United States) is happening too fast. That may in part be a response not only to the change happening in their communities, but also to the changes they perceive from news and ideas captured in their various social feeds. In an era when we can be so choosy about the news we get, occasional access to both far-right and far-left viewpoints may be scaring moderates, causing them to retreat into even more extreme silos.


Draining The Swamp (Of Fake News)

As a long-term strategy toward cleaning up a polluted lake of information, Silverman suggests that schools should teach students to be more media savvy and give them tools to pick out misinformation from verified reportage. In the future, we’ll all need to be our own editors, sussing out the real from the fake. In the near term, he suggests that web platforms embrace human curation in addition to algorithms. The more obvious hoax news is fairly easy to identify and could be tagged as such on Facebook and Twitter. He also thinks Facebook’s algorithm could be tweaked to lessen the impact of fake news. For instance, if a story is being shared among a small pocket of ideologically similar users, perhaps the network could refrain from blasting it out to a wider audience.

For services like Facebook, Google, Reddit, and Twitter, a certain amount of human curation is required to ensure that quality content—not just whatever has the most engagement—gets the widest circulation. (Ironically, earlier this year, Facebook fired its team of Trending Topics curators not long after right-wing pundits complained of liberal bias in the articles they were surfacing.)

“The fact that increasingly, Mark Zuckerberg has had to address this and senior executives have had to address this means that they’re feeling a sense of scrutiny and pressure,” says Silverman, “and I suspect they’re having a lot of conversations internally about what they’re going to do. So it’s going to be interesting.”

In the days since we spoke, Google and Facebook have publicly announced initial steps to squelch fake news sites. Both companies have banned fake news sites from using their ad platforms—a move that could help defund these kinds of sites. Teams of humans will now vet the publications that seek ad promotion on two very important gatekeepers.


But this is only the beginning of a much larger effort to keep low-quality information from reaching the widest possible audience. And there is a possibility that the news outlook won’t get any clearer. Trump has appointed Steve Bannon, executive chairman of Breitbart, a site known for its bigoted and sometimes false news and opinion stories, to be his chief strategist. There’s worry that Bannon’s media background might help the Trump administration coordinate a well-run propaganda machine. It’s a concern that should make efforts to combat misinformation that much more urgent.

Lies, propaganda & fake news: A challenge for our age
bbc.com/future/story/20170301-lies-propaganda-and-fake-news-a-grand-challenge-of-our-age

By Richard Gray 1 March 2017

Who was the first black president of America? It’s a fairly simple question with a straightforward answer. Or so you would think. But plug the query into a search engine and the facts get a little fuzzy.

We may have things better than ever – but we’ve also never faced such world-changing challenges. That’s why Future Now asked 50 experts – scientists, technologists, business leaders and entrepreneurs – to name what they saw as the key challenges in their area.

The range of different responses demonstrates the richness and complexity of the modern world. Inspired by these responses, over the next month we will be publishing a series of feature articles and videos that take an in-depth look at the biggest challenges we face today.

When I checked Google, the first result – given special prominence in a box at the top of the page – informed me that the first black president was a man called John Hanson in 1781. Apparently, the US has had seven black presidents, including Thomas Jefferson and Dwight Eisenhower. Other search engines do little better. The top results on Yahoo and Bing pointed me to articles about Hanson as well.

Welcome to the world of “alternative facts”. It is a bewildering maze of claim and counterclaim, where hoaxes spread with frightening speed on social media and spark angry backlashes from people who take what they read at face value. Controversial, fringe views about US presidents can be thrown centre stage by the power of search engines. It is an environment where the mainstream media is accused of peddling “fake news” by the most powerful man in the world. Voters are seemingly misled by the very politicians they elected and even scientific research - long considered a reliable basis for decisions - is dismissed as having little value.

For a special series launching this week, BBC Future Now asked a panel of experts about the grand challenges we face in the 21st Century – and many named the breakdown of trusted sources of information as one of the most pressing problems today. In some ways, it’s a challenge that trumps all others. Without a common starting point – a set of facts that people with otherwise different viewpoints can agree on – it will be hard to address any of the problems that the world now faces.

Having a large number of people in a society who are misinformed is absolutely devastating and extremely difficult to cope with – Stephan Lewandowsky, University of Bristol


The example at the start of this article may seem a minor, frothy controversy, but there is something greater at stake here. Leading researchers, tech companies and fact-checkers we contacted say the threat posed by the spread of misinformation should not be underestimated.

Take another example. In the run-up to the US presidential elections last year, a made-up story spread on social media claimed a paedophile ring involving high-profile members of the Democratic Party was operating out of the basement of a pizza restaurant in Washington DC. In early December a man walked into the restaurant - which does not have a basement - and fired an assault rifle. Remarkably, no one was hurt.

Some warn that “fake news” threatens the democratic process itself. “On page one of any political science textbook it will say that democracy relies on people being informed about the issues so they can have a debate and make a decision,” says Stephan Lewandowsky, a cognitive scientist at the University of Bristol in the UK, who studies the persistence and spread of misinformation. “Having a large number of people in a society who are misinformed and have their own set of facts is absolutely devastating and extremely difficult to cope with.”

A survey conducted by the Pew Research Center towards the end of last year found that 64% of American adults said made-up news stories were causing confusion about the basic facts of current issues and events.

Alternative histories

Working out who to trust and who not to believe has been a facet of human life since our ancestors began living in complex societies. Politics has always bred those who will mislead to get ahead.

But the difference today is how we get our information. “The internet has made it possible for many voices to be heard that could not make it through the bottleneck that controlled what would be distributed before,” says Paul Resnick, professor of information at the University of Michigan. “Initially, when they saw the prospect of this, many people were excited about this opening up to multiple voices. Now we are seeing some of those voices are saying things we don’t like and there is great concern about how we control the dissemination of things that seem to be untrue.”

There is great concern about how we control the dissemination of things that seem to be untrue – Paul Resnick, University of Michigan

We need a new way to decide what is trustworthy. “I think it is going to be not figuring out what to believe but who to believe,” says Resnick. “It is going to come down to the reputations of the sources of the information. They don’t have to be the ones we had in the past.”

We’re seeing that shift already. The UK’s Daily Mail newspaper has been a trusted source of news for many people for decades. But last month editors of Wikipedia voted to stop using the Daily Mail as a source for information on the basis that it was “generally unreliable”.

Yet Wikipedia itself - which can be edited by anyone but uses teams of volunteer editors to weed out inaccuracies - is far from perfect. Inaccurate information is a regular feature on the website and requires careful checking for anyone wanting to use it.

For example, the Wikipedia page for the comedian Ronnie Corbett once stated that during his long career he played a Teletubby in the children’s TV series. This is false but when he died the statement cropped up in some of his obituaries when writers resorted to Wikipedia for help.


Several obituaries for the comedian Ronnie Corbett falsely claimed he had once played a Teletubby because this statement appeared in his Wikipedia entry (Credit: Getty Images)

Other than causing offence or embarrassment – and ultimately eroding a news organisation’s standing – these sorts of errors do little long-term harm. There are some who care little for reputation, however. They are simply in it for the money. Last year, links to websites masquerading as reputable sources started appearing on social media sites like Facebook. Stories about the Pope endorsing Donald Trump’s candidacy and Hillary Clinton being indicted for crimes related to her email scandal were shared widely despite being completely made up.

“The major new challenge in reporting news is the new shape of truth,” says Kevin Kelly, a technology author and co-founder of Wired magazine. “Truth is no longer dictated by authorities, but is networked by peers. For every fact there is a counterfact. All those counterfacts and facts look identical online, which is confusing to most people.”

For every fact there is a counterfact and all those counterfacts and facts look identical online – Kevin Kelly, co-founder Wired magazine

For those behind the made-up stories, the ability to share them widely on social media means a slice of the advertising revenue that comes from clicks as people follow the links to their webpages. Many of the stories were found to be coming from a small town in Macedonia, where young people were using fake news as a get-rich-quick scheme, paying Facebook to promote their posts and reaping the rewards of the huge number of visits to their websites.

“The difference that social media has made is the scale and the ability to find others who share your world view,” says Will Moy, director of Full Fact, an independent fact-checking organisation based in the UK. “In the past it was harder for relatively fringe opinions to get their views reinforced. If we were chatting around the kitchen table or in the pub, often there would be a debate.”

But such debates are happening less and less. Information spreads around the world in seconds, with the potential to reach billions of people. But it can also be dismissed with a flick of the finger. What we choose to engage with is self-reinforcing and we get shown more of the same. It results in an exaggerated “echo chamber” effect.

People are quicker to assume they are being lied to but less quick to assume people they agree with are lying, which is a dangerous tendency – Will Moy, director of Full Fact


“What is noticeable about the two recent referendums in the UK - Scottish independence and EU membership - is that people seem to be clubbing together with people they agreed with and all making one another angrier,” says Moy. “The debate becomes more partisan, more angry and people are quicker to assume they are being lied to but less quick to assume people they agree with are lying. That is a dangerous tendency.”

The challenge here is how to burst these bubbles. One approach that has been tried is to challenge facts and claims when they appear on social media. Organisations like Full Fact, for example, look at persistent claims made by politicians or in the media, and try to correct them. (The BBC also has its own fact-checking unit, called Reality Check.)

Research by Resnick suggests this approach may not be working on social media, however. He has been building software that can automatically track rumours on Twitter, dividing people into those that spread misinformation and those that correct it. “For the rumours we looked at, the number of followers of people who tweeted the rumour was much larger than the number of followers of those who corrected it,” he says. “The audiences were also largely disjointed. Even when a correction reached a lot of people and a rumour reached a lot of people, they were usually not the same people. The problem is, corrections do not spread very well.”

One example of this that Resnick and his team found was a mistake that appeared in a leaked draft of a World Health Organisation report that stated many people in Greece who had HIV had infected themselves in an attempt to get welfare benefits. The WHO put out a correction, but even so, the initial mistake reached far more people than the correction did. Another rumour suggested the rapper Jay Z had died and reached 900,000 people on Twitter. Around half that number were exposed to the correction. But only a tiny proportion were exposed to both the rumour and correction.
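The “disjoint audiences” problem Resnick describes comes down to a simple set computation. The sketch below is hypothetical—the user IDs and counts are invented, and this is not Resnick’s actual tracking software—but it shows how the reach of a rumour, the reach of its correction, and the overlap between the two audiences would be measured.

```python
# Hypothetical audiences: which users saw tweets spreading a rumour,
# and which saw tweets correcting it.
rumour_audience = {"u1", "u2", "u3", "u4", "u5", "u6"}
correction_audience = {"u5", "u7", "u8"}

# Reach of each message, and the users exposed to both.
reach_rumour = len(rumour_audience)
reach_correction = len(correction_audience)
saw_both = rumour_audience & correction_audience  # set intersection

print(f"Rumour reached {reach_rumour} users; correction reached {reach_correction}.")
print(f"{len(saw_both)} user(s) saw both, so most rumour readers never see the fix.")
```

In Resnick’s findings the intersection was tiny relative to either audience, which is exactly why corrections fail to undo the rumour’s spread.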

This lack of overlap is a specific challenge when it comes to political issues. Moy fears the traditional watchdogs and safeguards put in place to ensure those in power are honest are being circumvented by social media.

“On Facebook political bodies can put something out, pay for advertising, put it in front of millions of people, yet it is hard for those not being targeted to know they have done that,” says Moy. “They can target people based on how old they are, where they live, what skin colour they have, what gender they are. We shouldn’t think of social media as just peer-to-peer communication - it is also the most powerful advertising platform there has ever been.”

But it may count for little. “We have never had a time when it has been so easy to advertise to millions of people and not have the other millions of us notice,” he says.

Twitter and Facebook both insist they have strict rules on what can be advertised, particularly when it comes to political advertising. Regardless, the use of social media adverts in politics can have a major impact. During the run-up to the EU referendum, the Vote Leave campaign paid for nearly a billion targeted digital adverts, mostly on Facebook, according to one of its campaign managers. One of those carried the claim that the UK pays £350m a week to the EU – a figure Sir Andrew Dilnot, the chair of the UK Statistics Authority, described as misleading. In fact, the UK pays around £276m a week to the EU because of a rebate.

“We need some transparency about who is using social media advertising when they are in election campaigns and referendum campaigns,” says Moy. “We need to be more equipped to deal with this - we need watchdogs that will go around and say, ‘Hang on, this doesn’t stack up’ and ask for the record to be corrected.”


Social media sites themselves are already taking steps. Mark Zuckerberg, founder of Facebook, recently spelled out his concerns about the spread of hoaxes, misinformation and polarisation on social media in a 6,000-word letter he posted online. In it he said Facebook would work to reduce sensationalism in its news feed by looking at whether people have read content before sharing it. It has also updated its advertising policies to reduce spam sites that profit from fake stories, and added tools to let users flag fake articles.

Other tech giants also claim to be taking the problem seriously. Apple’s Tim Cook recently raised concerns about fake news, and Google says it is working on ways to improve its algorithms so they take accuracy into account when displaying search results. “Judging which pages on the web best answer a query is a challenging problem and we don’t always get it right,” says Peter Barron, vice president of communications for Europe, Middle East and Asia at Google.

“When non-authoritative information ranks too high in our search results, we develop scalable, automated approaches to fix the problems, rather than manually removing these one by one. We recently made improvements to our algorithm that will help surface more high quality, credible content on the web. We’ll continue to change our algorithms over time in order to tackle these challenges.”

For Rohit Chandra, vice president of engineering at Yahoo, more humans in the loop would help. “I see a need in the market to develop standards,” he says. “We can’t fact-check every story, but there must be enough eyes on the content that we know the quality bar stays high.”

Google is also helping fact-checking organisations like Full Fact, which is developing new technologies that can identify and even correct false claims. Full Fact is creating an automated fact-checker that will monitor claims made on TV, in newspapers, in parliament or on the internet.

Initially it will target claims that have already been fact-checked by humans, sending out corrections automatically in an attempt to shut down rumours before they get started. As artificial intelligence gets smarter, the system will also do some fact-checking of its own.

“For a claim like ‘crime is rising’, it is relatively easy for a computer to check,” says Moy. “We know where to get the crime figures and we can write an algorithm that can make a judgement about whether crime is rising. We did a demonstration project last summer to prove we can automate the checking of claims like that. The challenge is going to be writing tools that can check specific types of claims, but over time it will become more powerful.”
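Moy’s “crime is rising” example hints at how simple the core comparison can be once you have the figures. A minimal sketch in Python, using invented placeholder numbers rather than real crime statistics (Full Fact’s actual tooling is assumed to be far more sophisticated):

```python
# Sketch of an automated check for a claim like "crime is rising":
# compare the two most recent values in an official time series.
def check_rising(topic, series):
    """Judge a '<topic> is rising' claim from a {year: value} series."""
    years = sorted(series)
    previous, latest = series[years[-2]], series[years[-1]]
    verdict = "supported" if latest > previous else "not supported"
    return (f"Claim '{topic} is rising' is {verdict}: "
            f"{previous:g} ({years[-2]}) -> {latest:g} ({years[-1]})")

# Hypothetical recorded-offence rates per 1,000 people -- not real data
crime_figures = {2014: 62.0, 2015: 64.5, 2016: 63.1}
print(check_rising("crime", crime_figures))
```

As Moy says, the hard part is knowing where to fetch authoritative figures for each type of claim; the comparison itself is trivial.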

What would Watson do?

It is an approach being attempted by a number of different groups around the world. Researchers at the University of Mississippi and at Indiana University are working on automated fact-checking systems. One of the world’s most advanced AIs has also had a crack at the problem. IBM has spent several years working on ways its Watson AI could help internet users distinguish fact from fiction. The team built a fact-checker app that could sit in a browser, use Watson’s language skills to scan a page and give a percentage likelihood that it was true. But according to Ben Fletcher, senior software engineer at IBM Watson Research, who built the system, it was unsuccessful in tests, though not because it couldn’t spot a lie.

“We got a lot of feedback that people did not want to be told what was true or not,” he says. “At the heart of what they want, was actually the ability to see all sides and make the decision for themselves. A major issue most people face without knowing it is the bubble they live in. If they were shown views outside that bubble they would be much more open to talking about them.”


This idea of helping break through the isolated information bubbles that many of us now live in comes up again and again. By presenting people with accurate facts it should be possible to at least get a debate going. But telling people what is true and what is not does not seem to work. For this reason, IBM shelved its plans for a fact-checker.

“There is a large proportion of the population in the US living in what we would regard as an alternative reality,” says Lewandowsky. “They share things with each other that are completely false. Any attempt to break through these bubbles is fraught with difficulty as you are being dismissed as being part of a conspiracy simply for trying to correct what people believe. It is why you have Republicans and Democrats disagreeing over something as fundamental as how many people appear in a photograph.”

One approach Lewandowsky suggests is to make search engines that offer up information that may subtly conflict with a user’s world view. Similarly, firms like Amazon could offer up films and books that provide an alternative viewpoint to the products a person normally buys.

“By suggesting things to people that are outside their comfort zone but not so far outside they would never look at it you can keep people from self-radicalising in these bubbles,” says Lewandowsky. “That sort of technological solution is one good way forward. I think we have to work on that.”

Google is already doing this to some degree. It operates a little-known grant scheme that allows certain NGOs to place high-ranking adverts in response to certain searches. It is used by groups like the Samaritans so that their pages rank highly when someone searches for information about suicide, for example. Google says anti-radicalisation charities could likewise promote their message on searches about so-called Islamic State.

But there are understandable fears about powerful internet companies filtering what people see, even within these organisations themselves. For those leading the push to fact-check information, better tagging of accurate information online is the preferable approach, because it lets people make up their own minds.

“Search algorithms are as flawed as the people who develop them,” says Alexios Mantzarlis, director of the International Fact-Checking Network. “We should think about adding layers of credibility to sources. We need to tag and structure quality content in effective ways.”

Mantzarlis believes part of the solution will be providing people with the resources to fact-check information for themselves. He is planning to develop a database of sources that professional fact-checkers use and intends to make it freely available.

But what if people don’t agree with official sources of information at all? This is a problem governments around the world are facing as the public treats what officials tell them with increasing scepticism.

Nesta, a UK-based charity that supports innovation, has been looking at some of the challenges that face democracy in the digital era and how the internet can be harnessed to get people more engaged. Eddie Copeland, director of government innovation at Nesta, points to an example in Taiwan where members of the public can propose ideas and help formulate them into legislation. “The first stage in that is crowdsourcing facts,” he says. “So before you have a debate, you come up with the commonly accepted facts that people can debate from.”


But that means facing up to our own bad habits. “There is an unwillingness to bend one’s mind around facts that don’t agree with one’s own viewpoint,” says Victoria Rubin, director of the language and information technology research lab at Western University in Ontario, Canada. She and her team have been working to identify fake news on the internet since 2015. Will Moy agrees. He argues that by slipping into lazy cynicism about what we are being told, we allow those who lie to us to get away with it. Instead, he thinks we should be interrogating what they say and holding them to account.

Ultimately, however, there’s an uncomfortable truth we all need to address. “When people say they are worried about people being misled, what they are really worried about is other people being misled,” says Resnick. “Very rarely do they worry that fundamental things they believe themselves may be wrong.” Technology may help to solve this grand challenge of our age, but it is time for a little more self-awareness too.

Soll, “The Long and Brutal History of Fake News”
politico.com/magazine/story/2016/12/fake-news-history-long-violent-214535

The fake news hit Trent, Italy, on Easter Sunday, 1475. A 2 ½-year-old child named Simonino had gone missing, and a Franciscan preacher, Bernardino da Feltre, gave a series of sermons claiming that the Jewish community had murdered the child, drained his blood and drunk it to celebrate Passover. The rumors spread fast. Before long da Feltre was claiming that the boy’s body had been found in the basement of a Jewish house. In response, the Prince-Bishop of Trent Johannes IV Hinderbach immediately ordered the city’s entire Jewish community arrested and tortured. Fifteen of them were found guilty and burned at the stake. The story inspired surrounding communities to commit similar atrocities.

Recognizing a false story, the papacy intervened and attempted to stop both the story and the murders. But Hinderbach refused to meet the papal legate and, feeling threatened, simply spread more fake news stories about Jews drinking the blood of Christian children. In the end, the popular fervor supporting these anti-Semitic “blood libel” stories made it impossible for the papacy to interfere with Hinderbach, who had Simonino canonized—Saint Simon—and attributed a hundred miracles to him. Today, historians have catalogued the fake stories of child-murdering, blood-drinking Jews, which have existed since the 12th century, as part of the foundation of anti-Semitism. And yet one anti-Semitic website still claims the story is true and Simon is still a martyred saint. Some fake news never dies.

Over the past few months, “fake news” has been on the loose once again. From bogus stories about Hillary Clinton’s imminent indictment to myths about a postal worker in Ohio destroying absentee ballots cast for Donald Trump, colorful and damaging tales have begun to circulate rapidly and widely on Twitter and Facebook. In some cases they have had violent results: Earlier this month a man armed with an AR-15 fired a shot inside a Washington, D.C., restaurant, claiming to be investigating (fake) reports that Clinton aide John Podesta was heading up a child abuse ring there.

But amid all the media handwringing about fake news and how to deal with it, one fact seems to have gotten lost: Fake news is not a new phenomenon. It has been around since news became a concept 500 years ago with the invention of print—a lot longer, in fact, than verified, “objective” news, which emerged in force a little more than a century ago. From the start, fake news has tended to be sensationalist and extreme, designed to inflame passions and prejudices. And it has often provoked violence. The Nazi propaganda machine relied on the same sorts of fake stories about


ritual Jewish drinking of children’s blood that inspired Prince-Bishop Hinderbach in the 15th century. Perhaps most dangerous is how terrifyingly persistent and powerful fake news has proved to be. As Pope Sixtus IV found out, wild fake stories with roots in popular prejudice often prove too much for responsible authorities to handle. With the decline of trusted news establishments around the country, who’s to stop them today?

***

Fake news took off at the same time that news began to circulate widely, after Johannes Gutenberg invented the printing press in 1439. “Real” news was hard to verify in that era. There were plenty of news sources—from official publications by political and religious authorities, to eyewitness accounts from sailors and merchants—but no concept of journalistic ethics or objectivity. Readers in search of fact had to pay close attention. In the 16th century, those who wanted real news believed that leaked secret government reports, such as Venetian government correspondence, known as relazioni, were reliable sources. But leaked original documents were soon followed by fake relazioni. By the 17th century, historians began to play a role in verifying the news by publishing their sources as verifiable footnotes. The trial over Galileo’s findings in 1610 also created a desire for scientifically verifiable news and helped create influential scholarly news sources.

But as printing expanded, so flowed fake news, from spectacular stories of sea monsters and witches to claims that sinners were responsible for natural disasters. The Lisbon Earthquake of 1755 was one of the more complex news stories of all time, with the church and many European authorities blaming the natural disaster on divine retribution against sinners. An entire genre of fake news pamphlets (relações de sucessos) emerged in Portugal, claiming that some survivors owed their lives to an apparition of the Virgin Mary. These religiously inspired accounts of the earthquake sparked the famed Enlightenment philosopher Voltaire to attack religious explanations of natural events, and also made Voltaire into an activist against fake religious news.

There was a lot of it in that era. When, in 1761, Marc-Antoine Calas, the 22-year-old son of a respected Protestant merchant in Toulouse, apparently committed suicide, Catholic activists spread news stories that Calas’ father, Jean, had killed him because he wanted to convert to Catholicism. The local judicial authorities posted signs calling for legal witnesses to corroborate the account, successfully turning rumors into official facts, and, in turn, official news.

Jean Calas was convicted on the rumor-fueled testimony and was publicly and gruesomely tortured before being executed. Horrified at the atrocity, Voltaire wrote his own counterattacks dissecting the absurdity that young Calas would have a full understanding of the meaning of conversion and that his peaceable father would hang him for it. The Calas story eventually sparked outrage against such fake legal stories, torture and even execution. It became a touchstone for the Enlightenment itself.

Yet even the scientific revolution and the Enlightenment could not stop the flow of fake news. For example, in the years preceding the French Revolution, a cascade of pamphlets appeared in Paris exposing for the first time the details of the near-bankrupt government’s spectacular budget deficit. Each came from a separate political camp, and each contradicted the other with different numbers, blaming the deficit on different finance ministers. Eventually, through government leaks and more and more verifiable accounts, enough information was made public for readers to glean a general sense of state finance; but, like today, readers had to be both skeptical and skilled to figure out the truth.

Even our glorified Founders were perpetrators of fake news for political ends. To whip up revolutionary fervor, Ben Franklin himself concocted propaganda stories about murderous “scalping”


Indians working in league with the British King George III. Other revolutionary leaders published fake stories claiming that King George was sending thousands of foreign soldiers to slaughter the American patriots and turn the tide of the War of Independence, in order to get people to enlist and support the revolutionary cause.

By the 1800s, fake news was back again, swirling around questions of race. Like Jewish blood libel, American racial sentiments and fears were powerful in producing false stories. One persistent “cottage industry” of fake news in antebellum America was stories of African-Americans spontaneously turning white. In other instances, fake news reports of slave uprisings or of crimes by slaves led to terrible violence against African-Americans.

Sensationalism always sold well. By the early 19th century, modern newspapers came on the scene, touting scoops and exposés, but also fake stories to increase circulation. The New York Sun’s “Great Moon Hoax” of 1835 claimed that there was an alien civilization on the moon, and established the Sun as a leading, profitable newspaper. In 1844, anti-Catholic newspapers in Philadelphia falsely claimed that Irishmen were stealing bibles from public schools, leading to violent riots and attacks on Catholic churches. During the Gilded Age, yellow journalism flourished, using fake interviews, false experts and bogus stories to spark sympathy and rage as desired. Joseph Pulitzer’s New York World published exaggerated crime dramas to sell papers. In the 1890s, plutocrats like William Randolph Hearst and his Morning Journal used exaggeration to help spark the Spanish-American War. When Hearst’s correspondent in Havana wired that there would be no war, Hearst—the inspiration for Orson Welles' Citizen Kane—famously responded: “You furnish the pictures, I’ll furnish the war.” Hearst published fake drawings of Cuban officials strip-searching American women—and he got his war.

One silver lining in this long and alarming history of fake news is that yellow journalism and its results—from civil violence to war—caused a backlash and sent the public in search of more objective news. It was this flourishing market that sparked the rise of relatively objective journalism as an industry in turn-of-the-century America. For the first time, American papers hired reporters to cover local beats and statehouses, building a chain of trust between local, state and national reporters and the public.

While partisan reporting and sensationalism never went away (just check out supermarket newsstands), objective journalism did become a successful business model—and also, until recently, the dominant one. In 1896, Adolph Ochs purchased the New York Times, looking to produce a “facts”-based newspaper that would be useful to the wealthy investor class by providing reliable business information and general news. Ochs showed that news did not have to be sensationalist to be profitable, though the paper was accused of being a mouthpiece for “bondholders.”

Of course, the objective journalism consensus had its hiccups. With the advent of World War II, and in light of the Nazi and Communist propaganda machines, there was concern about the U.S. government’s wartime involvement in producing news propaganda. In the 1950s, Joseph McCarthy was accused of manipulating reporters like “Pavlov’s dogs,” but a New Yorker article from the period insisted that reporters should report and not “tell readers which ‘facts’ are really ‘facts’ and which are not.” By the 1960s, a new generation of reporters signed on to report on “non establishment” stories. Many of these reporters questioned the very ideal of objectivity, yet, nonetheless, hewed to the basic guiding principle of reporting based on verifiable and reputable sources.

***

It wasn’t until the rise of web-generated news that our era’s journalistic norms were seriously challenged, and fake news became a powerful force again. Digital news, you might say, has brought


yellow journalism back to the fore. For one, algorithms that create news feeds and compilations have no regard for accuracy and objectivity. At the same time, the digital news trend has decimated the force—measured in both money and manpower—of the traditional, objectively minded, independent press.

The Pew Research Center’s “State of the Media 2016” paints a grim picture for most serious news organizations. Advertising revenue is down; staffs continue to get cut; the number of newspapers has declined by 100 since 2004. Between 2003 and 2014, with the decline of the printed press, the number of professional statehouse reporters dropped 35 percent. Professional local beat reporters are also a dying breed. These figures, trained in basic journalistic principles, were locally known and trusted. They could be found in bars and local schools and acted as the human link between statehouses, Washington, D.C., and the U.S. population. They were seen as local heroes. (Jimmy Stewart often played truth-obsessed newspaper reporters in films, like the 1948 thriller Call Northside 777.) But today, these popular role models and societal links are gone, and with them, a trusted filter within civil society—the sort of filter that can say with authority to fellow local citizens that fake news is not only fake, it is also potentially deadly.

Real news is not coming back in any tangible way on a competitive local level, or as a driver of opinion in a world where the majority of the population does not rely on professionally reported news sources and so much news is filtered via social media, and by governments. And as real news recedes, fake news will grow. We’ve seen the terrifying results this has had in the past—and our biggest challenge will be to find a new way to combat the rising tide.

How to Stop the Spread of Fake News: NYTimes Room for Debate

Facebook Must Acknowledge and Change Its Financial Incentives
Robyn Caplan is the lead researcher for the Algorithms and Publics/Reimagining Accountability project at Data & Society Research Institute. UPDATED NOVEMBER 22, 2016, 3:20 AM

The problem of “fake news” is not necessarily a problem of technology. Rather, it is linked to the economic and organizational incentives behind Facebook’s News Feed algorithm.

For Facebook to fully tackle fake news, it must acknowledge that financial and political incentives drive the company to privilege the act of sharing over dissemination of truthful coverage. Facebook also must own the fact that it is now a major news distributor. It doesn’t just need to tweak its algorithm; it needs to tweak its business practices and objectives.

In the last few decades, technology companies have boasted about their "disruption" of industries like the news media. Now that the consequences of this disruption are becoming clear, platforms have a responsibility to engage on an ongoing basis with domain-experts, such as legacy media organizations and professionals, to learn about journalistic ethics — and preserve them.

The news media industry’s financial dependence on Facebook for content distribution has, in many cases, weakened the reach of solid journalism. The idea that the role of journalism now is to "give


people what they want" or "what matters to them" has become embedded in the same logic that drives Facebook’s algorithmic personalization and ad-targeting products.

Media organizations were especially left behind when Facebook changed its algorithm in June to privilege friends and family over major publishers, a change that came just before the pre-election spike in fake news.

Facebook must create institutionalized pathways for journalists and policymakers to help shape any further changes to the algorithm. First steps could include more transparency about the business model driving these changes, incorporating opportunities for comment from members of civil society and the news industry, and creating an internal team dedicated to media ethics concerns, with an explicit mission statement driven by values rather than increasing clicks and views.

In the U.S., where the authority of traditional media institutions has been on the wane since the rise of the cable news era, turning to the Federal Communications Commission for regulation may not prove effective. However, Facebook is a global company, and other countries, such as Germany, are becoming concerned about the potential for Facebook to influence their own political processes. The United States and other countries may be able to mobilize policies and commissions used in the broadcast and cable eras to emphasize media diversity, impartiality and accuracy, but ultimately it is incumbent on Facebook to change its business model.

Algorithms Could Help Social Media Users Spot Fake News
nytimes.com/roomfordebate/2016/11/22/how-to-stop-the-spread-of-fake-news/algorithms-could-help-social-media-users-spot-fake-news

Annemarie Dooling, the director of programming for Vox Media's Racked.com, is an online communities expert who has worked with Salon, AOL and The Huffington Post. She is on Twitter (@TravelingAnna).

All discussion on the role that social platforms played in the election has centered on the type of content being shared — none of it has focused on the user. But the user is the one who shares and elevates content — regardless of what it is — and that means the user can also be the first line of defense in a safer network, particularly when it comes to combating erroneous information.

It's no secret by now that most social networks rely on users' emotional responses to share and spread content. But if users were trained to identify fake or incendiary posts, including content that isn't truly news, it might be possible for platforms to break this cycle.

There are some precedents for training users to identify fake news. Anyone with one of the popular email providers has likely scanned his or her spam folder for a confirmation email or a friend's reply that was banished there accidentally. That's because each email provider has been trained on the domains and headline formats that most commonly indicate spam, so it can mark messages as such before users even see them. Many email providers are even a little overzealous in flagging messages as possible spam. This identification system helps users learn, for example, that emails from wealthy and generous Nigerian princes are probably not to be trusted.
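The spam analogy can be made concrete. Below is a toy sketch of how a provider might learn which words signal junk from labelled examples; the training messages are invented, and real spam filters use far richer features and probabilistic models than this word-counting heuristic:

```python
from collections import Counter

# Invented training examples -- real providers learn from millions of messages
spam = ["URGENT transfer from Nigerian prince",
        "You WON a FREE prize",
        "FREE money URGENT reply"]
ham = ["Lunch on Friday?",
       "Meeting notes attached",
       "Your order confirmation"]

def tokens(text):
    return text.lower().split()

spam_counts = Counter(t for msg in spam for t in tokens(msg))
ham_counts = Counter(t for msg in ham for t in tokens(msg))

def spam_score(subject):
    # Fraction of words seen more often in spam than in legitimate mail
    words = tokens(subject)
    spammy = sum(1 for w in words if spam_counts[w] > ham_counts[w])
    return spammy / len(words)

print(spam_score("FREE prize from Nigerian prince"))  # close to 1 -> likely spam
```

The same shape of model, trained instead on domains and headlines, is what the author imagines flagging fake stories before users see them.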

Most of the forums, news sites and platforms that include comment sections also rely on similar machine-learning algorithms to filter obscenities. Usually comments that the algorithm thinks are obscene are held in a queue until a moderator can approve or reject them. Human oversight of the


process is key: Comments for a story on, say, the activist band Pussy Riot, would definitely fall into a moderation queue — incorrectly flagged as an obscenity, with the machine desperately asking, "Is this abusive? Should we keep an eye out for this commenter?" — until the comment moderator fishes out the comments and approves them.

Social networks could incorporate similar machine-learning algorithms that would warn users about dangerous or untrustworthy content. This would be a small deterrent for someone who wants to share false content, of course, but it could provide a helpful flag for another user who doesn't want to be part of the problem. Those of us who are more knowledgeable about how content circulates online take for granted that others don't understand URL structure, domain names or bylines. A simple warning — that labels the story or site as untrustworthy — could be the first line of education for the uninitiated. The second could be a way to report fake news. (Facebook technically has a "report" button, but it is hard to find.) With enough fake news complaints, a URL could be wiped from the system after review from a human moderator.
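The two ideas in this paragraph, a warning label keyed to the link's domain and a report count that escalates a URL to a human moderator, can be sketched together. The domain lists, threshold and function names below are all hypothetical, invented for illustration:

```python
from urllib.parse import urlparse

FLAGGED = {"conservativedailypost.com", "totally-fake-news.example"}
TRUSTED = {"nytimes.com", "bbc.co.uk"}
reports = {}             # tally of "report fake news" clicks per URL
REVIEW_THRESHOLD = 3     # complaints before a human moderator reviews

def share_warning(url):
    """Label shown to a user before their share goes through."""
    domain = urlparse(url).netloc.removeprefix("www.")
    if domain in FLAGGED:
        return f"Warning: {domain} has been labelled untrustworthy."
    if domain in TRUSTED:
        return "OK to share."
    return "Unrecognised source - check the byline and domain before sharing."

def report_fake(url):
    """Returns True once enough complaints queue the URL for human review."""
    reports[url] = reports.get(url, 0) + 1
    return reports[url] >= REVIEW_THRESHOLD

print(share_warning("https://www.conservativedailypost.com/maryland"))
```

Note that the warning only educates; as the author says, removal still requires enough reports plus review by a human moderator.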

Yes, this isn't a foolproof system, and many comment platforms and email providers still struggle with this model. But a comprehensive solution to this problem starts with the users and someone to train them. Some have argued that hiring executive editors for social media sites would solve these problems, but when you have one billion active monthly users, it is more important to have an expert in conversation and behavior rein in the spread of content.

These sites should start with a community manager who can work with both people and product, and not just police the site for fake content.

Facebook, Twitter Users Must Be More Critical of Content
nytimes.com/roomfordebate/2016/11/22/how-to-stop-the-spread-of-fake-news/facebook-twitter-users-must-be-more-critical-of-content

Nicholas A. Glavin is a researcher at the United States Naval War College’s Center on Irregular Warfare and Armed Groups. He is on Twitter (@nickglavin).

The bypassing of traditional media outlets has flooded the marketplace of ideas, where misinformation, extremist content and state-sponsored disinformation proliferate. Nefarious actors like ISIS and neo-Nazis are thriving in the democratization of social media. As users, we need to take more responsibility for the content we read and share.

And as we’ve seen with the Islamic State’s social media operations, nefarious actors are thriving in the democratization of this marketplace. Indeed, according to a report by J.M. Berger for George Washington University’s Program on Extremism, while Facebook and Twitter have in recent years been more aggressive in taking down pro-ISIS content, the Twitter accounts of white nationalists and neo-Nazis have grown by 600 percent since 2012.

Social media companies can’t be the arbiters of what is “true,” but they do have contractually binding terms of service on what is unacceptable and should be removed and which users should be suspended. While Facebook has recently put forward a strategy to police its communities — making it easier for users to report misinformation, and exploring a warning system to label stories that have been flagged as false by third parties — the company could be more aggressive in going after the economics behind fake news.


Governments have a role in policing fake news as well — but that is a double-edged sword. According to Twitter’s transparency report, Turkey and Russia were responsible for nearly 80 percent of government requests to remove content between June 2015 and July 2016. While companies can decline these requests, this is a reminder that governments can shape information for both constructive and destructive political purposes. For example, Turkey has been flagging pro-Kurdish material, whereas Germany has pressured Facebook to target xenophobic, racist content and abusive statements.

This all means that the responsibility falls as much, if not more, on social media users, who must be conscious of, or learn to circumvent, malign content. The dissemination of fake news and the social media echo chambers reflect the magnetic pull that viral content has on our society. Without the appropriate tools to evaluate information, people risk perpetuating and building isolated beliefs. Just as peer-to-peer social networks online (and in real life) play an integral role in the radicalization process of jihadists, they have a similar impact on how we all formulate our opinions and perceptions of truth. The weaponization of social media has demonstrated

serious social, political and economic repercussions when it comes to the bubbles we build online. The Islamic State is not the only entity or coalition that uses social media to develop, package and deploy narratives that are then used to target and attract individuals. State and non-state actors are using the internet to appeal to individuals' emotion rather than reason.

Hate speech, fake news, extremist content and anything that rises to the top of these online echo chambers are uncomfortable reminders of how democratized the information space has become. Users must tread carefully in this new paradigm, remaining conscious of what drives content on social media and how it manipulates us. Critical thinking has become the front line of defense in navigating this environment.

Social Media Companies Like Facebook Need to Hire Human Editors
nytimes.com/roomfordebate/2016/11/22/how-to-stop-the-spread-of-fake-news/social-media-companies-like-facebook-need-to-hire-human-editors

Cathy O'Neil, a mathematician and data scientist, is the author of "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy." She is on Twitter.

Facebook’s algorithm interprets clicks, likes, and comments as proxies for interest and for quality content. It’s not a stupid idea, but problems quickly arise. High-quality content can certainly rise and spread this way, but people tend to click on links to gossip rags with news of the Kardashians or “like” questionable memes that appeal to their biases.

Facebook thus assumes that people have a deep interest in such content, even though the algorithm is actually simply exposing a weakness we wish we didn’t have, much like a dieter who cannot resist readily available M&M’s. Due to these weak proxies and the size and influence of the platform, our Facebook environments eventually get filled with the media equivalent of junk food. At the extreme end, this comes in the form of fake news, which has been created explicitly to manipulate us and thus game the algorithm.

There is no purely algorithmic process that can infer truth from lies, and there will always be individuals who game the system for their own gain.
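To make the proxy problem concrete, here is a minimal, purely illustrative sketch (this is not Facebook’s actual code; all post data, field names, and weights are invented) of what ranking by engagement signals alone looks like, and why it surfaces junk:

```python
# Illustrative sketch of engagement-proxy ranking, as described in the essay.
# All posts, field names, and weights are hypothetical.

def engagement_score(post):
    """Score a post using clicks, likes, and comments as proxies for quality."""
    return post["clicks"] + 2 * post["likes"] + 3 * post["comments"]

posts = [
    {"title": "In-depth policy analysis", "clicks": 120, "likes": 40, "comments": 10},
    {"title": "Celebrity gossip", "clicks": 900, "likes": 300, "comments": 150},
    {"title": "Fabricated outrage story", "clicks": 1500, "likes": 700, "comments": 400},
]

# Sorting by engagement alone pushes the gossip and the fabricated story
# to the top regardless of accuracy -- the weakness the essay identifies.
ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["title"] for p in ranked])
```

Nothing in such a score can see whether a story is true; it only measures how irresistible the story is to click on.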

Of course, people have always loved junky news. But in the past, it wasn't presented as equal to real news on a neutral platform.



So what does Facebook do now? It will need to use actual human judgment if it wishes to address this problem. That human judgment can be supplemented algorithmically, for example with a scaled “white list” of authentic news sources that is constantly expanded and updated by a team of human editors. It can also be, to some extent, crowdsourced from its users in the form of votes for or against misinformation. But that alone won’t be sufficient, because the same people who gamed the algorithm with fake news can be counted on to game any voting process as well, and with gusto. The truth is, there is no purely algorithmic process that can infer truth from lies, and for Facebook there will be no escaping its role as a final arbiter: essentially, a news source. The good news is that Facebook has lots of readily available revenue to devote to the problem, and there are plenty of out-of-work journalists.
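The hybrid approach described above, editorial judgment supplemented algorithmically, can be sketched as follows. This is a hypothetical illustration, not any real platform’s system: the domain list, field names, and weights are all invented, and the key design choice is that crowdsourced votes adjust but never fully override the human-curated signal, precisely because votes can be gamed:

```python
# Hypothetical sketch of a human-curated "white list" supplemented by
# crowdsourced votes, as the essay proposes. All names and weights invented.

TRUSTED_SOURCES = {"apnews.com", "reuters.com"}  # maintained by human editors

def credibility_score(story):
    """Blend editorial whitelist status with reader votes on misinformation."""
    base = 1.0 if story["domain"] in TRUSTED_SOURCES else 0.3
    votes = story["true_votes"] + story["false_votes"]
    if votes == 0:
        return base  # no votes yet: fall back on the editorial signal alone
    # Weight the editorial signal more heavily than the vote ratio,
    # since vote totals can be gamed just like engagement metrics.
    vote_ratio = story["true_votes"] / votes
    return 0.7 * base + 0.3 * vote_ratio

story = {"domain": "reuters.com", "true_votes": 80, "false_votes": 20}
print(round(credibility_score(story), 2))  # 0.7 * 1.0 + 0.3 * 0.8 = 0.94
```

Even in this toy version, O’Neil’s point holds: the whitelist itself still has to be built and defended by human editors, so the humans remain the final arbiters.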
