Volume 99
Mehtab Khan
Automated tools used in online speech governance are prone to large-scale errors, yet they remain widely used. Legal and policy responses have largely focused on case-by-case evaluations of these errors rather than on an examination of the tools' development process. Moreover, information on the internet is no longer generated solely by users, but also by sophisticated language tools like ChatGPT, which will pose a challenge to speech governance. Yet legal and policy measures have not responded adequately as AI tools become more dynamic and impactful. To address the challenges posed by algorithmic content governance, I argue that there is a need for a regulatory approach that focuses on the tools used in both content moderation and content generation contexts, which can be done by viewing this technology through an algorithmic accountability lens. I provide an overview of the technical and normative features of these tools that help us frame their regulation as an algorithmic accountability issue. I do this in three steps. First, I discuss the lack of sufficient attention to AI tools in current regulatory approaches. Second, I highlight the shared features of content moderation and content generation to offer insights into the interlinked and evolving landscape of online speech and AI governance. Third, I situate this discussion of speech governance within a broader framework of algorithmic accountability to guide future regulatory interventions.
Full article available here.