The insight driving CaliberAI is that this universe is a bounded infinity. While AI moderation is nowhere close to being able to decisively rule on truth and falsity, it should be able to identify the subset of statements that could even potentially be defamatory.
Carl Vogel, a professor of computational linguistics at Trinity College Dublin, has helped CaliberAI build its model. He has a working formula for statements highly likely to be defamatory: They must implicitly or explicitly name an individual or group; present a claim as fact; and use some sort of taboo language or idea—like suggestions of theft, drunkenness, or other kinds of impropriety. If you feed a machine-learning algorithm a large enough sample of text, it will detect patterns and associations among negative words based on the company they keep. That will allow it to make intelligent guesses about which terms, if used about a specific group or person, place a piece of content into the defamation danger zone.
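To make Vogel's three-part test concrete, here is a minimal, illustrative sketch in Python. It is not CaliberAI's model: the word lists, the `flag_defamation_risk` function, and the crude checks for names and hedging are all hypothetical stand-ins for the patterns a trained classifier would learn from data.

```python
import re

# Hypothetical taboo vocabulary; a real system would learn these associations.
TABOO_TERMS = {"thief", "stole", "fraud", "drunk", "corrupt", "liar"}

# Hedging words that suggest allegation or opinion rather than asserted fact.
HEDGES = {"allegedly", "reportedly", "may", "might", "rumored", "accused"}


def flag_defamation_risk(sentence: str) -> bool:
    """Return True if a sentence meets all three rough criteria:
    it names a person or group, asserts a claim as fact, and uses taboo language."""
    tokens = [t.strip(".,").lower() for t in sentence.split()]

    # 1. Names an individual or group? (Crude proxy: a capitalized word
    #    that is not the first word of the sentence.)
    names_someone = bool(re.search(r"(?<=\s)[A-Z][a-z]+", sentence))

    # 2. Presented as fact? (Crude proxy: no hedging words.)
    asserted_as_fact = not any(t in HEDGES for t in tokens)

    # 3. Uses taboo language or ideas?
    uses_taboo = any(t in TABOO_TERMS for t in tokens)

    return names_someone and asserted_as_fact and uses_taboo


print(flag_defamation_risk("John Smith stole money from the charity."))  # True
print(flag_defamation_risk("John Smith allegedly stole money."))         # False: hedged
print(flag_defamation_risk("The weather in Dublin was cold."))           # False: no taboo term
```

The point of the sketch is only the shape of the test: each criterion narrows the space of statements, and only text that clears all three lands in the "danger zone" a human editor would then review.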
Logically enough, there was no data set of defamatory material sitting out there for CaliberAI to use, because publishers work very hard to avoid putting that stuff into the world. So the company built its own. Conor Brady started by drawing on his long experience in journalism to generate a list of defamatory statements. “We thought about all the nasty things that could be said about any person and we chopped, diced, and mixed them until we’d kind of run the whole gamut of human frailty,” he…