Security researchers say Apple’s CSAM tech is easy to fool
Fresh research from one of the world’s most prestigious academic institutions claims the kind of technology Apple plans to use can easily be fooled.
When the techs don’t work
The research, from Imperial College London, claims that scanners for illegal images, such as the tech Apple is likely to use, may not work as intended.
The research claims that proposed algorithms that detect illegal images on devices can be easily fooled with imperceptible changes to images.
They say that companies and governments have proposed using built-in scanners on devices like phones, tablets and laptops to detect illegal images, such as child sexual abuse material (CSAM), but the research raises questions about how well these scanners might work in practice.
Imperial College London research into the robustness of five similar algorithms found that altering an ‘illegal’ image’s unique ‘signature’ on a device meant it would fly under the algorithm’s radar 99.9 per cent of the time.
More pain, no gain
The scientists behind the study claim their results show perceptual hashing-based client-side scanning (PH-CSS) algorithms will not consistently detect illegal content on personal devices.
The research also raises serious questions about how effective, and therefore proportional, current plans to tackle illegal material through on-device scanning really are. For the cost of on-device surveillance, little is gained.
Senior author Dr Yves-Alexandre de Montjoye, of Imperial’s Department of Computing and Data Science Institute, said: “By simply applying a specifically designed filter mostly imperceptible to the human eye, we misled the algorithm into thinking that two near-identical images were different. Importantly, our algorithm is able to generate a large number of diverse filters, making the development of countermeasures difficult.
“Our findings raise serious questions about the robustness of such invasive approaches.”
These algorithms can be built into devices to scan for illegal material. They sift through a device’s images and compare their signatures with those of known illegal material. Upon finding an image that matches a known illegal image, the device would quietly report this to the company behind the algorithm and, ultimately, law enforcement authorities.
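To illustrate the general idea, here is a minimal Python sketch of hash-based matching. It uses a toy ‘average hash’, not Apple’s NeuralHash or any of the algorithms the researchers tested, and the file names, known-hash database and matching threshold are illustrative assumptions.

```python
# Minimal sketch of perceptual-hash matching. This is a toy "average hash",
# not Apple's NeuralHash or any production PH-CSS algorithm; the file names,
# the known-hash database and the threshold are illustrative assumptions.
from PIL import Image
import numpy as np

def average_hash(path: str, size: int = 8) -> np.ndarray:
    """Shrink and grey-scale the image, then threshold each pixel against
    the mean to produce a 64-bit perceptual signature."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def hamming_distance(h1: np.ndarray, h2: np.ndarray) -> int:
    """Count the bits on which two signatures disagree."""
    return int(np.count_nonzero(h1 != h2))

# Hypothetical database of signatures of known illegal images.
known_hashes = [average_hash("known_image.png")]

def matches_known_material(path: str, threshold: int = 5) -> bool:
    """Flag an image whose signature is within `threshold` bits of a known one."""
    h = average_hash(path)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)

if __name__ == "__main__":
    print(matches_known_material("photo_on_device.png"))
```

The key design point is that matching is fuzzy: signatures within a small Hamming distance are treated as the same image, which is what lets the scanner tolerate resizing or recompression, and also what an evasion filter tries to exploit.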
How they tested it
To test the robustness of the algorithms, the researchers used a new class of tests called detection avoidance attacks to see whether applying their filter to simulated ‘illegal’ images would let them slip under the radar of PH-CSS and avoid detection.
They tagged several everyday images as ‘illegal’ and fed them through the algorithms, which were similar to Apple’s proposed systems, and measured whether or not each image was flagged as illegal. They then applied a visually imperceptible filter to the images, altering their signatures, and fed them through again.
After the filter was applied, the images looked different to the algorithm 99.9 per cent of the time, even though they looked identical to the human eye.
The researchers say this shows just how easily people with illegal material could fool the surveillance. For this reason, the team have decided not to make their filter-generation software public.
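Since the actual filter-generation software is being withheld, the sketch below only illustrates the measurement behind such a test: apply a small pixel-level perturbation (here random noise, standing in for the study’s specifically designed filter) and check how far the toy signature from the earlier sketch moves. The file name and threshold are, again, illustrative assumptions.

```python
# Toy illustration of a detection-avoidance measurement. The perturbation is
# random noise of +/- 2 intensity levels standing in for the study's
# specifically designed, imperceptible filter; it demonstrates the check,
# not a working attack. The file name and threshold are illustrative.
from PIL import Image
import numpy as np

def average_hash_from_pixels(pixels: np.ndarray, size: int = 8) -> np.ndarray:
    """Same toy average hash as above, but computed from a pixel array."""
    img = Image.fromarray(pixels.astype(np.uint8)).convert("L").resize((size, size))
    arr = np.asarray(img, dtype=np.float32)
    return (arr > arr.mean()).flatten()

original = np.asarray(Image.open("tagged_image.png").convert("RGB"), dtype=np.float32)

rng = np.random.default_rng(0)
perturbed = np.clip(original + rng.uniform(-2.0, 2.0, original.shape), 0, 255)

h_orig = average_hash_from_pixels(original)
h_pert = average_hash_from_pixels(perturbed)
bits_changed = int(np.count_nonzero(h_orig != h_pert))

threshold = 5  # illustrative matching threshold, as in the sketch above
print(f"{bits_changed} of {h_orig.size} signature bits changed; "
      f"match {'broken' if bits_changed > threshold else 'survives'}")
```

Random noise of this magnitude will rarely break a match on its own; the study’s point is that filters optimised against a specific hashing algorithm can do so almost every time while remaining imperceptible.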
It just doesn’t work
Co-lead author Ana-Maria Cretu, PhD candidate at the Department of Computing, said: “Two images that look alike to us can look completely different to a computer. Our job as scientists is to test whether privacy-preserving algorithms really do what their champions claim they do.
“Our findings suggest that, in its current form, PH-CSS won’t be the magic bullet some hope for.”
Co-lead author Shubham Jain, also a PhD candidate in the Department of Computing, added: “This realisation, combined with the privacy concerns attached to such invasive surveillance mechanisms, suggests that even the best PH-CSS proposals today are not ready for deployment.”
You can read the full report here.