On the Utility of Lay Summaries and AI Safety Disclosures: Toward Robust, Open Research Oversight

Allen Schmaltz


Abstract
In this position paper, we propose that the community consider encouraging researchers to include two riders, a “Lay Summary” and an “AI Safety Disclosure,” in future NLP papers published in ACL forums that present user-facing systems. The goal is to encourage researchers, via a relatively non-intrusive mechanism, to consider the societal implications of technologies carrying (un)known and/or (un)knowable long-term risks, to highlight failure cases, and to provide a mechanism by which the general public (and scientists in other disciplines) can more readily engage in the discussion in an informed manner. This simple proposal requires minimal additional up-front cost for researchers; the lay summary, at least, has significant precedent in the medical literature and other areas of science; and the proposal is intended to supplement, rather than replace, existing approaches for encouraging researchers to consider the ethical implications of their work, such as those of the Collaborative Institutional Training Initiative (CITI) Program and institutional review boards (IRBs).
Anthology ID: W18-0801
Volume: Proceedings of the Second ACL Workshop on Ethics in Natural Language Processing
Month: June
Year: 2018
Address: New Orleans, Louisiana, USA
Venues: EthNLP | NAACL | WS
Publisher: Association for Computational Linguistics
Pages: 1–6
URL: https://www.aclweb.org/anthology/W18-0801
DOI: 10.18653/v1/W18-0801
PDF: http://aclanthology.lst.uni-saarland.de/W18-0801.pdf