Google has publicized two main ways it deals with child abuse images on its systems, but it doesn’t talk much about how it detects explicit animated material. (Photo by Chris Jackson/Getty Images)
Over the last two decades, tech giants have had to deal with an ever-growing deluge of videos and images of child sexual abuse on their platforms. As Apple recently found out, it’s a difficult problem to solve: scanning people’s devices and online accounts for illegal content can raise serious privacy concerns.
But it isn’t just explicit photos and videos of children that Silicon Valley’s biggest companies are trying to find and erase from their servers. They’re also looking for cartoons depicting graphic acts involving children, as revealed by a recent search warrant asking Google to provide information on a suspect who allegedly owned such animations.
That kind of content is potentially illegal to own under U.S. law and can be detected by Google’s anti-child sexual abuse material (CSAM) systems, a fact not previously discussed in the public domain, the warrant reveals. Google has long acknowledged that its code can detect child abuse imagery using two technologies. The first uses YouTube-designed software that looks for “hashes” of previously known illegal content. Such hashes are alphanumeric representations of a file, meaning a computer can scan the files within, for instance, a Gmail email and raise a flag if any of them matches the hash of known abusive material.
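To illustrate the principle, here is a minimal sketch of exact hash matching in Python. It is not Google’s actual implementation: production systems such as YouTube’s CSAI Match or Microsoft’s PhotoDNA rely on perceptual hashes that survive re-encoding and resizing, whereas a plain cryptographic hash like SHA-256 only catches byte-identical copies. The hash database and function names here are hypothetical.

```python
import hashlib

# Hypothetical set of hashes of previously identified illegal files.
# In a real system this would be a large, curated database maintained
# by organizations such as NCMEC, not a hardcoded set.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def flag_if_known(path: str) -> bool:
    """Return True if the file's hash matches a known-bad entry."""
    return file_sha256(path) in KNOWN_BAD_HASHES
```

The design trade-off is what makes perceptual hashing necessary in practice: an exact hash like the one above changes completely if a single byte of the file changes, so real scanning systems compute fingerprints from the visual content itself rather than the raw bytes.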