Beyond Platforms: Private Censorship, Parler, and the Stack

By Jillian C. York, Corynne McSherry, and Danny O’Brien

Last week, following riots that saw supporters of President
Trump breach and sack parts of the Capitol building, Facebook and
Twitter made the decision to give the president the boot. That was
notable enough, given that both companies had previously treated
the president, like other political leaders, as largely exempt
from content moderation rules. Many of the president’s followers
responded by moving to Parler. This week, the response has taken a
new turn: infrastructure companies much closer to the bottom of
the technical “stack,” including Amazon Web Services (AWS) and
Google’s Android and Apple’s iOS app stores, are deciding to cut
off service not just to an individual but to an entire platform.
Parler has so far struggled to return online, partly through
errors of its own making, but also because the lower down the
technical stack you go, the harder it is to find alternatives or
to re-implement capabilities the rest of the Internet takes for
granted.

Whatever you think of Parler, these decisions should give you
pause. Private companies
have strong legal rights
 under U.S. law to refuse to host or
support speech they don’t like. But that refusal carries
different risks when a group of companies comes together to ensure
that certain speech or speakers are effectively taken offline
altogether.

The Free Speech Stack—aka “Free Speech Chokepoints”

To see the implications of censorship choices by deeper stack
companies, let’s back up for a minute. As researcher Joan Donovan
puts it, “At every level of the tech stack, corporations are
placed in positions to make value judgments regarding the
legitimacy of content, including who should have access, and when
and how.” And the decisions made by companies at varying layers of
the stack are bound to have different impacts on free expression.

At the top of the stack are services like Facebook, Reddit, or
Twitter, platforms whose decisions about who to serve (or what to
allow) are comparatively visible, though still far too opaque to
most users. Their responses can be comparatively targeted to
specific users and content and, most importantly, do not cut off
as many alternatives. For instance, a discussion forum lies close
to the top of the stack: if you are booted from such a platform,
there are other venues in which you can exercise your speech.
These are the sites and services that all users (both content
creators and content consumers) interact with most directly. They
are also the places people think of when they think of the content
(i.e., “I saw it on Facebook”). Users are often required to have
individual accounts, or are advantaged if they do. Users may also
specifically seek out the sites for their content. The closer to
the user end, the more likely it is that sites will have more
developed and apparent curatorial and editorial policies and
practices—their “signature styles.” And users typically have an
avenue, flawed as it may be, to communicate directly with the
service.

At the other end of the stack are internet service providers
(ISPs), like Comcast or AT&T. Decisions made by companies at this
layer of the stack to remove content or users raise greater
concerns for free expression, especially when there are few, if
any, competitors. For example, it would be very concerning if the
only broadband provider in your area cut you off because they
didn’t like what you said online—or what someone else whose name
is on the account said. The adage “if you don’t like the rules, go
elsewhere” doesn’t work when there is nowhere else to go.

In between are a wide array of intermediaries, such as upstream
hosts like AWS, domain name registrars, certificate authorities
(such as Let’s Encrypt), content delivery networks (CDNs), payment
processors, and email services. EFF has a handy chart of some of
those key links between speakers and their audience here. These
intermediaries provide the infrastructure for speech and commerce,
but many have only the most tangential relationship to their
users. Faced with a complaint, takedown will be much easier and
cheaper than a nuanced analysis of a given user’s speech, much
less the speech that might be hosted by a company that is a user
of their services. So these services are more likely to simply cut
a user or platform off than to do a deeper review. Moreover, in
many cases both speakers and audiences will not be aware of the
identities of these services and, even if they are, will have no
independent relationship with them. These services are thus not
commonly associated with the speech that passes through them and
have no “signature style” to enforce.

Infrastructure Takedowns Are Equally If Not More Likely to Silence
Marginalized Voices

We saw a particularly egregious example of an infrastructure
takedown just a few months ago, when
Zoom made the decision to block a San Francisco State University
online academic event
featuring prominent activists from Black
and South African liberation movements, the advocacy group Jewish
Voice for Peace, and controversial figure Leila Khaled—inspiring
Facebook and YouTube to follow suit. The decision, which Zoom
justified on the basis of Khaled’s alleged ties to a
U.S.-designated foreign terrorist organization, was apparently made
following external pressure.

Although we have numerous
concerns
with the manner in which social media platforms like
Facebook, YouTube, and Twitter make decisions about speech, we
viewed Zoom’s decision differently. Companies like Facebook and
YouTube, for good or ill, include content moderation as part of the
service they provide. Since the beginning of the pandemic in
particular, however, Zoom has been used around the world more like
a phone company than a platform. And just as you don’t expect
your phone company to start making decisions about who you can
call, you don’t expect your conferencing service to start making
decisions about who can join your meeting.

It is precisely for this reason that Amazon’s ad-hoc decision to
cut off hosting to social media alternative Parler, in the face of
public pressure, should be of concern to anyone worried about how
decisions about speech are made in the long run. In some ways, the
ejection of Parler is neither a novel nor a surprising
development. First, it is by no means the first instance of
moderation at this level of the stack. Prior examples include
Amazon denying service to WikiLeaks and to the entire nation of
Iran. Second, the domestic pressure on companies like Amazon to
disentangle themselves from Parler was intense, and for good
reason. After all, in the days leading up to its removal by
Amazon, Parler played host to outrageously violent threats against
elected politicians from its verified users, including lawyer
L. Lin Wood.

But infrastructure takedowns nonetheless represent a significant
departure from the expectations of most users. First, they are
cumulative, since all speech on the Internet relies upon multiple
infrastructure hosts.  If users have to worry about satisfying not
only their host’s terms and conditions but also those of every
service in the chain from speaker to audience—even though the
actual speaker may not even be aware of all of those services or
where they draw the line between hateful and non-hateful
speech—many users will simply avoid sharing controversial
opinions altogether. Second, they are less precise. In the past,
we’ve seen entire large websites darkened by upstream hosts
because of a complaint about a single posted document. More
broadly,
infrastructure level takedowns move us further toward a thoroughly
locked-down, highly monitored web, from which a speaker can be
effectively ejected at any time.

Going forward, we are likely to see more cases that look like
Zoom’s censorship of an academic panel than ones that look like
Amazon cutting off another Parler. Nevertheless, Amazon’s
decision highlights
core questions of our time: Who should decide what is acceptable
speech, and to what degree should companies at the infrastructure
layer play a role in censorship?

At EFF, we think the answer is both simple and challenging:
wherever possible, users should decide for themselves, and
companies at the infrastructure layer should stay well out of it.
The firmest, most consistent approach infrastructure chokepoints
can take is to simply refuse to be chokepoints at all. They should
act to defend their role as a conduit rather than a publisher.
Just as law and custom developed a norm that we might sue a
publisher for defamation, but not the owner of the building the
publisher occupies, we are slowly developing norms about
responsibility for content online. Companies like Zoom and Amazon
have an opportunity to shape those norms—for the better or for
the worse.

Internet Policy and Practice Should Be User-Driven, Not
Crisis-Driven

It’s easy to say today, in a moment of crisis, that a service
like Parler should be shunned. After all, people are using it to
organize attacks on the U.S. Capitol and on Congressional leaders,
with an expressed goal to undermine the democratic process. But
when the crisis has passed, pressure on basic infrastructure, as a
tactic, will be re-used, inevitably, against unjustly marginalized
speakers and forums. This is not a slippery slope, nor a tentative
prediction—we have already seen this happen to groups and
communities that have far less power and resources than the
President of the United States and the backers of his cause. And
this facility for broad censorship will not be lost on foreign
governments that wish to silence legitimate dissent. Now that the
world has been reminded that infrastructure can be commandeered to
make decisions to control speech, calls for it will increase, and
principled objections may fall by the wayside.

Over the coming weeks, we can expect to see more decisions like
these from companies at all layers of the stack. Just today,
Facebook removed accounts belonging to members of the Ugandan
government in advance of Tuesday’s elections in the country,
citing concerns about election manipulation. Some of the decisions
these companies make may be well-researched, while others will
undoubtedly come as the result of external pressure and at the
expense of marginalized groups.

The core problem remains: regardless of whether we agree with an
individual decision, these decisions overall have not been and
will not be made democratically, in line with the requirements of
transparency and due process. Instead, they are made by a handful
of individuals, in a handful of companies, the most distanced from
and least visible to most Internet users. Whether you agree with
those decisions or not, you will not be a part of them, nor be
privy to their considerations. And unless we dismantle the
increasingly centralized chokepoints in our global digital
infrastructure, we can anticipate an escalating battle among
political factions and nation-states to seize control of their
powers.

Source: EFF.org

