A frivolous sensibility (reader, listener, consumer, citizen) has only him-/herself to blame for being a “mark” for marketers, i.e., a sucker for first impressions (e.g., regarding prima facie meaning as valid). “Monetizing attention through advertising” (source A) is just giving you the business. It’s what business does in competitive spheres. Dramatizing valid news and views for the sake of securing audience attention invites the confusion of boundaries in narrative reliability, compelled by the business environment that thrives on consumption, novelty, and entertainment. (It’s why corporate TV journalism tends to become more tabloid, like Disney’s ABC World News.)
Being casual about this might be based in good sense, in which case medial monetizing of attention and fake views lose efficacy, given high literacy about what media are doing.
So, one may have been attentive about what good sense is, and even be habitually deliberative about that. Being well begins with concerted cultivation of one’s childhood and becomes an ongoing cultivation of oneself, e.g., gaining good sense about genuineness, about self-identical value, and advancing one’s sense of reality through highly evidence-based thinking. The more that casual life is oriented by attentiveness, the less that fakery succeeds. Pretty simple. The more that attentiveness has been cultivated deliberately (e.g., via “higher” education), the more that casual life can feel authentically confident about the example one provides to others.
Network power can powerfully counter oligarchic power. And progressives can use their power to advance community.
Education systems are gradually giving more attention to astute reasoning (in the guise of “critical thinking skills,” which I briefly discussed earlier). This must not be narrowly understood as merely being critical, because critique either serves a preferable, constructive view or it results in handwringing (or worse: cynicism, if not paternalism).
“There has been a proliferation of efforts to inject training of critical-information skills into primary and secondary schools” (A). But framing that relative to information, rather than orienting it by a desire to advance constructive understanding (within educational excellence), can be self-defeating. Indeed, “an emphasis on fake [views] might also have the unintended consequence of reducing the perceived credibility of real-[views] outlets” (A). By the way, silly skeptics about climate risks count on naïve realism about certainty in science to undermine the fact that the scientific community is singularly global, allowing for more confidence about reality than any other form of apprehension in the evolution of humanity.
Critique is a function of astute reasoning, which is a function of good thinking in being well, which relates importantly to advancing a good life and good community.
“Google, Facebook, and Twitter...use complex statistical models to predict and maximize engagement with content. It should be possible to adjust those models to increase emphasis on quality information” (A). That is a March 2018 comment from the Science review. Now in May, we see activism by internet leaders—self-serving, of course; but great, if that produces systematic attentiveness (Facebook, Twitter) or genuine pretenses of “moral leadership” (Microsoft). The Science review is usefully specific:
The platforms could provide consumers with signals of source quality that could be incorporated into the algorithmic rankings of content. They could minimize the personalization of political information relative to other types of content (reducing the creation of “echo chambers”). Functions that emphasize currently trending content could seek to exclude bot activity from measures of what is trending. More generally, the platforms could curb the automated spread of [views] content by bots and cyborgs (users who automatically share [views] from a set of sources, with or without reading them), although for the foreseeable future, bot producers will likely be able to design effective countermeasures.
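To make the review’s suggestions a bit more concrete, here is a minimal Python sketch of what “increasing emphasis on quality information” in a ranking model and “excluding bot activity from measures of what is trending” could look like in the abstract. Everything here—the Post class, the source_quality signal, the suspected_bots set, the blending weight—is my own illustrative assumption, not any platform’s actual system; real ranking models are proprietary and vastly more complex.

```python
# Illustrative sketch only: blend a predicted engagement score with an
# assumed source-quality signal, and exclude suspected bot accounts from
# trending counts. All names and signals here are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    author_id: str
    engagement_score: float  # model-predicted likelihood of engagement (0..1)
    source_quality: float    # assumed external quality signal for the source (0..1)

def rank_feed(posts: list[Post], quality_weight: float = 0.5) -> list[Post]:
    """Order posts by a blend of predicted engagement and source quality.

    quality_weight = 0.0 reproduces pure engagement ranking; raising it
    shifts emphasis toward higher-quality sources.
    """
    def score(p: Post) -> float:
        return (1 - quality_weight) * p.engagement_score + quality_weight * p.source_quality
    return sorted(posts, key=score, reverse=True)

def trending_counts(shares: list[tuple[str, str]], suspected_bots: set[str]) -> dict[str, int]:
    """Count shares per post, ignoring accounts flagged as likely bots.

    `shares` is a list of (account_id, post_id) pairs; `suspected_bots` is an
    assumed upstream classification, not something this sketch computes.
    """
    counts: dict[str, int] = {}
    for account_id, post_id in shares:
        if account_id in suspected_bots:
            continue
        counts[post_id] = counts.get(post_id, 0) + 1
    return counts
```

The point of the sketch is only that such adjustments are parameterizable: a platform could tune the weight given to quality signals, and could filter flagged accounts out of trending measures, without abandoning engagement prediction altogether.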
I disagree that there is “erosion of long-standing institutional bulwarks against misinformation in the internet age” (A). Rather, such “bulwarks” (standards, protections) have been insufficiently extended to new spheres of media (new kinds, new scales) whose market development should comply with professional standards and lawful protections. The European Union is showing leadership here that cowardly Trumpland should emulate.
But a reassuring aspect about the plague of fake views is that the fakery is largely identifiable: “The problem may be disproportionately attributable to the activities of a few hundred sites—330 by one conservative estimate” (A). Surely, that number (derived from 2017 and earlier research) doesn’t include the thousands of fake accounts that have been associated with state actors and that Facebook and Twitter have banned (or are in the diligent process of identifying). And there’s the cybersecurity community, which needs to appear to be out of the picture, but likely tracks more fakery than the companies can.
I don’t worry about the cybersecurity community violating my privacy by working with Google, Facebook, Twitter, etc.
Inasmuch as regulation is required (apart from pursuit of particular agents of fakery by platforms), regulators must not only ensure impartiality toward users, but also be seen publicly to ensure impartiality in defining, imposing, and enforcing regulations that survive fair public comment, in the spirit of deliberative democracy.
“Structural interventions generally raise legitimate concerns about respecting private enterprise and human agency. But....internet oligopolies are already shaping human experience on a global scale. The questions before us are how those immense powers are being—and should be—exercised and how to hold these massive companies to account.
“We must redesign our information ecosystem in the 21st century. This effort must be global in scope, as many countries, some of which have never developed a robust [views] ecosystem, face challenges around fake and real [views] that are more acute than in the United States. More broadly, we must answer a fundamental question: How can we create a [views] ecosystem and culture that values and promotes truth?” (A).
The Science reviewers end by asserting the need to “promote interdisciplinary research to reduce the spread of fake [views] and to address the underlying pathologies it has revealed” (A).
“The platforms have attempted each of these steps [listed in above sections] and others....However, the platforms have not provided enough detail for evaluation by the research community or subjected their findings to peer review, making them problematic for use by policy-makers or the general public....More generally, researchers need to conduct a rigorous, ongoing audit of how the major platforms filter information.
“There are challenges to scientific collaboration from the perspectives of industry and academia. Yet, there is an ethical and social responsibility, transcending market forces, for the platforms to contribute what data they uniquely can to a science of fake [views].” (A)
Next: Section 4: “Stanley Fish’s ‘...Mother...’ article”