In 2014, when the NY Times released its Innovation report, a study of the organization's future in the digital era, many applauded the vision and the liberal newspaper's willingness to challenge its own traditional values. The document, originally intended as an internal report, was made public, establishing a frame of reflection for a wide audience of media consumers.
Among the conclusions that make Innovation a valuable document for the digital media era, one of the editors raised a point whose truth the following two and a half years would confirm: "Our reflex was first to perfect and afterward to publish. That's how things have always been in journalism. However, we have to ask ourselves if that's still the case today. (…) Beyond our journalistic values, though, we can adopt the model of 'the minimally functional product' (…) because then we can receive feedback and improve it as we go."
This was one of the fundamental paradigm shifts the NY Times was announcing: it had become mandatory for the newspaper to turn into a predominantly digital organization, the first of its kind.
The shift toward digital, the growth of mobile readership, content delivery through digital platforms: all of these demanded speed and adaptability, at the expense of everything taught in journalism school. Use reader reactions instead of three-source verification!
Popular validation of news, through thousands of rushed clicks, would slowly replace the professional routines of old-school journalism. The truth was that the referendum was replacing formal binary logic.
Fake news is born!
If a large organization such as the NY Times was raising, from March 2014 onward, the acute problem of abandoning the oldest form of information management, we can all imagine how things stand for the rest of the media.
A Microsoft study of the Canadian media market shows that the maximum attention span of the average user, for any given text in the digital realm, is eight seconds. A headline and a one-sentence lead? A photo and a few keywords? It is clear that nobody has time and patience today, and hardly anyone will focus for long on topics of no immediate interest.
A research study at the Stanford Graduate School of Education, published in November 2016, reached a worrying conclusion: American students, and of course not only them, have serious difficulty reasoning about the reliability of the information they access online, and in distinguishing fact from opinion, advertising from editorial content, or social from political texts.
Although made up of people with remarkable dexterity in the online realm, Generation Y most often fails to make simple judgments about the information it receives: a perfect target for manipulation by the fake-news market. And if millennials have difficulty recognizing genuine information, other categories of online readers are only growing more and more receptive to worthless stories.
The fake-news problem exploded in the last three months of the U.S. presidential election. Obscure websites, with neither legitimacy nor history, earned as much credence among internet users as respectable media institutions. The controversy involved not only the two main candidates but the whole of American society. Politicians, mass-media outlets, business and intelligence communities, and NGOs sought answers to a problem that had appeared suddenly. How is it possible that lies, detectable with mere common sense, become believable and are enthusiastically shared again and again?
Clues for an answer can be found in recent events.
In the summer of 2014, an American comedian posted on Facebook a photograph of Steven Spielberg on the set of the famous movie "Jurassic Park", resting and smiling beside one of the mechanical dinosaurs used during filming. The text accompanying the photo described the "savagery" of the moment: "Recreational hunter poses beside the triceratops he butchered."
In the 21st century, there are people who believe that dinosaurs were at one point contemporary with humans. In the 21st century, there are, among us, people who believe that dinosaurs still live, that the Earth is flat, and that a day has 20 hours. And when these people gain access to the Internet, to the indiscriminate possibility of making content go viral on global networks, truth becomes a secondary character in a play without a script, under tilted lights. In such a world, any reasonable social actor loses his or her credibility. The public decides what is good, useful, and plausible, accepting slices of post-truth.
Every year, Oxford Dictionaries picks a word of the year. For 2016, it is no wonder that post-truth ranked first, and what should concern us is the associated definition: "(…) relating to circumstances in which objective facts are less influential in shaping public opinion than emotions and personal beliefs."
In the age of certainties found "on the web," truth loses influence!
According to 2016 research by the Pew Research Center, 62% of Americans get their news on social media platforms, and 44% keep up with what's new in the world via Facebook. The growing influence of this social platform, which publishes no original content of its own, has led many to cite it as a main source of information. "I read that on Facebook today" has become a mark of news consumerism and of indifference, one that now weighs more than primary sources, more than attribution, and, worse still, more than the critical spirit that should accompany any piece of information.
Conditions thus favor the spread of the most implausible pieces of news, produced in obscure, garage-like offices with financial or other interests. Replicating content across the network is an excellent source of revenue and of "recognition" as disinformation champions: dinosaur illusions sold by the pound, or by the piece, to groups selected by an algorithm that everywhere confirms their own prejudices. Although it does not admit it explicitly, Facebook has the means to surface on users' screens only news that makes them happy, news that reinforces their sense of belonging to a community, in order to drive interaction.
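The mechanism described above can be illustrated with a deliberately simplified sketch. This is not Facebook's actual algorithm, which is proprietary; the `Story` type, the `user_affinity` table, and the numbers are all invented for illustration. It only shows the general principle: ranking items by a user's past engagement with similar content naturally pushes forward whatever the user already prefers.

```python
# Toy illustration of engagement-based feed ranking (NOT Facebook's real
# algorithm). Stories matching topics the user has engaged with before
# rank higher, reinforcing existing preferences (the "filter bubble").
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    topic: str

def rank_feed(stories, user_affinity):
    """Sort stories by the user's historical engagement rate per topic.

    `user_affinity` maps topic -> past click/like rate (hypothetical data).
    Topics the user never engaged with default to 0.0 and sink to the bottom.
    """
    return sorted(stories, key=lambda s: user_affinity.get(s.topic, 0.0), reverse=True)

feed = rank_feed(
    [Story("Budget vote", "politics"), Story("Dinosaur hoax", "conspiracy")],
    user_affinity={"conspiracy": 0.9, "politics": 0.1},
)
print([s.title for s in feed])  # the conspiracy story ranks first
```

Nothing in such a ranking checks whether a story is true; engagement alone decides visibility, which is exactly the dynamic the article describes.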
The same Pew research shows that only one in ten Americans uses another source to verify the information received. In such conditions, disinformation thrives. It has helped make group truths viral, available and applauded within the online community in which users find their own values reflected.
Facebook is under pressure, as are other social media platforms, to find and implement measures that would block fake news. Media, social, political, and public entities, as well as publishers, demand a stronger commitment from the tech giant to improve the situation. Facebook consistently refuses to consider itself a publisher, preferring the label of a platform for expression based on equality of chances for media entities and individuals alike. At the same time, Facebook rejects the idea of giving up its news feed, or at least of giving up the word "news".
The truth of the matter is that it is very difficult, sometimes even for the trained eye, to detect fake news with mathematical precision. Beyond the obvious cases, which users can flag, published news carries many nuances and contexts that do not permit precise categorization. Many variables (cultural, linguistic, anthropological, demographic, and racial, plus satire, double meaning, and historical context) stand as so many impediments to sorting the millions of pieces of information entering the network.
Between rumor, gossip, and pure manipulation lies an area that cannot be regulated so easily. It does not even allow a clear separation of fact from speculation, of lies from subjective interpretation. For example, even in 2017, we still have not settled whether vaccination is good or bad.
The great research schools are already at work on software for interpreting text semantics, on subjectivity lexicons, and on algorithms for cross-verifying texts, tools that would raise red flags about texts circulating on the Internet.
Bing Liu, an American researcher specializing in natural-language processing, published in 2015 "Sentiment Analysis," a study of how malicious intent can be identified within a text, beyond the precautions taken by its author. Classifying subjectivity depends on so many variables that, as the author admits, nobody at this moment dares attempt a 100% automatic solution; at best, such tools can narrow the search area.
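A minimal sketch can show what a lexicon-based red-flagging tool of the kind described above might look like. The word list and the threshold here are invented for illustration; real subjectivity lexicons contain thousands of curated entries, and real systems combine them with machine-learned classifiers. As the passage stresses, such a tool only narrows the search area for human reviewers; it does not decide truth.

```python
# Illustrative lexicon-based subjectivity scoring: flag emotionally loaded
# texts for human review. The mini-lexicon and threshold are invented.
import re

# Hypothetical mini-lexicon of subjectivity cues (real ones hold thousands).
SUBJECTIVE_CUES = {
    "outrageous", "shocking", "unbelievable", "disgraceful",
    "amazing", "terrible", "savagery", "butchered",
}

def subjectivity_score(text: str) -> float:
    """Return the fraction of words that match the subjectivity lexicon."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in SUBJECTIVE_CUES)
    return hits / len(words)

def red_flag(text: str, threshold: float = 0.15) -> bool:
    """Flag a text for human review; this narrows the search, nothing more."""
    return subjectivity_score(text) >= threshold

print(red_flag("Shocking! An outrageous, unbelievable act of savagery."))  # True
print(red_flag("The committee met on Tuesday to review the budget."))      # False
```

Even this toy version shows why full automation is out of reach: satire, quotation, and double meaning all score the same as genuine malice, so a human judgment remains the final filter.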
Only an educated person, equipped with the necessary critical toolset and open-minded enough to accept divergent points of view, can detect untruth, insinuation, or bias in a text, and can understand the blend of states within a feeling. Just like in life!
*Mihaela Nicola is a member of the LARICS Experts Council.