2020 was a year that challenged not only the fortitude of our families but also the fabric of our nation. Last year saw many complex ethical issues arise from our use of technology, both in society and as individuals. From debates over the proper role of social media to the adoption of invasive technologies like facial recognition that pushed the bounds of personal privacy, many of the ethical challenges exposed in 2020 will flow into 2021 as our society debates how to respond to these developments and how to pursue the common good together as a very diverse community.
Here are three areas of ethical concern with technology that we will need to watch for if we hope to navigate 2021 well.
Content Moderation and Section 230
Some of the most talked about ethical issues in technology, even as 2021 is just getting started, are the debates over online content moderation, the role of social media in our public discourse, and the merits of Section 230 of the 1996 Communications Decency Act. If you are unfamiliar with Section 230 and the debates surrounding the statute, it essentially functions as legal protection for online platforms and companies so they are not liable for the information posted to their platforms by third party users.
In exchange for these protections, internet companies and platforms are to act in “good faith” and are encouraged to remove content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” But what exactly do “good faith” and “otherwise objectionable” mean amid the raging debates over the role of social media today?
This question is at the heart of the debate over Section 230’s usefulness today. Some argue that platforms like Facebook, Google, Twitter, and others must do more to combat the spread of misinformation, disinformation, and fake news online. Even as these platforms have labeled misleading content and removed posts that violate their community policies, many argue that the companies simply aren’t doing enough.
But on the other side of the aisle, some argue that these Section 230 protections are being used as cover to censor certain content online, often in a partisan and inconsistently applied manner (especially on the international stage), and may amount to violations of users’ free speech. They argue that Section 230 must be repealed or substantially modified in order to combat bias against certain political, social, or religious views.
As technology policy expert and ERLC Research Fellow Klon Kitchen aptly states, “All of these perspectives are enabled by vagaries surrounding the text of the law, the intent behind it, and the relative values and risks posed by large Internet platforms.” Regardless of where one lands in this debate, we will likely see inflamed conversations over this statute and the extent to which it should be maintained, if at all.