Could algorithms produce libelous news content? Seth C. Lewis of the University of Oregon, Amy Kristin Sanders of Northwestern University in Qatar, and Casey Carmody of the University of Minnesota argue that news organizations must seriously consider legal liability as they develop newswriting bots.
They review the issue in the context of US First Amendment law and legal cases involving algorithms and libel.
The research points out two key issues to consider:
- determining fault in a case of algorithm-based libel, and
- the inability of news organizations to adopt defenses similar to those used by Google and other providers of algorithmic content.
Public figures suing for libel must prove actual malice, that is, that the defendant published knowing the content was false or with reckless disregard for the truth, and meeting that standard could be difficult when the content comes from a news algorithm. The situation could differ, however, if a news organization relied on an algorithm without any additional editorial oversight prior to publication. News organizations should also be aware of liability to private plaintiffs, who can win a libel case by proving mere negligence on the part of the news organization, the authors note.
US law distinguishes between publishers and distributors. Google, for example, has used Section 230 of the Communications Decency Act to immunize itself from defamation suits arising from automated content. News organizations would not enjoy the same shield: their content, even if created algorithmically, would probably be evaluated as though it were produced through a traditional editorial process.
Because US law provides strong protection even for potentially libelous speech, news organizations in other countries are likely to face stiffer penalties and greater regulation when automated journalism violates the law, the researchers conclude.
The article “Libel by Algorithm?” was published in Journalism & Mass Communication Quarterly and is available on the publisher’s website (free abstract).
Picture: untitled by qimono, license CC0 1.0, modified