Facebook may be suggesting that people upload their own sensitive photos to Facebook in order to prevent them from being spread on Facebook. Huh?!
Well, the idea isn’t dumb, to be honest. The idea is this: if I send an intimate/compromising photo to someone and it occurs to me that this someone might betray my trust by sharing the photo with others without my consent, I can preemptively upload the photo to Facebook. The image would then be analysed and a hash generated to help identify it if and when someone else tries to publish it via Facebook, Messenger or Instagram without my consent. Once the hash is generated, the photo itself would be deleted.
It’s not really addressed whether and how the system will identify photos that have been altered: changed aspect ratios, borders, added text, images that have been turned into outright memes, or even just significant compression. Considering how well Google Assistant/Photos/Lens and even Bixby (lulz aside) identify stuff in photos, this may be a non-issue though.
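Facebook hasn’t published the exact matching algorithm, but surviving alterations like this is usually the job of a *perceptual* hash rather than a cryptographic one: similar-looking images produce similar (not wildly different) hashes. A minimal sketch of one such scheme, the “average hash” — the toy image and all names here are illustrative, not Facebook’s actual method:

```python
import random

def average_hash(pixels, hash_size=8):
    """Perceptual hash: downsample to hash_size x hash_size block
    averages, then emit 1 for each block brighter than the mean."""
    bh = len(pixels) // hash_size       # block height
    bw = len(pixels[0]) // hash_size    # block width
    blocks = []
    for i in range(hash_size):
        for j in range(hash_size):
            block = [pixels[y][x]
                     for y in range(i * bh, (i + 1) * bh)
                     for x in range(j * bw, (j + 1) * bw)]
            blocks.append(sum(block) / len(block))
    mean = sum(blocks) / len(blocks)
    return [1 if b > mean else 0 for b in blocks]

def hamming(h1, h2):
    """Number of differing bits; small distance = likely the same image."""
    return sum(a != b for a, b in zip(h1, h2))

random.seed(0)
# Toy 32x32 grayscale "image" (values kept below 210 so brightening never clips).
img = [[random.randint(0, 200) for _ in range(32)] for _ in range(32)]
# Uniformly brightened copy: every block and the mean shift equally,
# so the bit pattern is unchanged.
bright = [[min(255, p + 10) for p in row] for row in img]
# Noisy copy, standing in for recompression artifacts.
noisy = [[max(0, min(255, p + random.randint(-5, 5))) for p in row] for row in img]

print(hamming(average_hash(img), average_hash(bright)))  # → 0
print(hamming(average_hash(img), average_hash(noisy)))   # small, not 0-guaranteed
```

An exact (cryptographic) hash would change completely under any of these edits; a perceptual hash keeps the distance small, which is presumably what lets a match survive compression or a border. How far schemes like this hold up against crops, text overlays and full meme treatment is exactly the open question above.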
A more pressing concern is this: If I were to pin three labels on Facebook, #trustworthy wouldn’t make the top 20. How are we going to trust the company with anything, let alone our nudie shots? Could we ever be sure they wouldn’t just retain the photo anyway? Abuse it? Be hacked? Be forced to retain it and share it with a suspect government or intelligence agency?
Who would we trust?
In the Australian pilot project outlined, this happens in conjunction with a government agency. Would you trust the government? Would you trust China’s? Egypt’s? Would you trust mine?
All these concerns aside, I do welcome a technological solution to the technological problem that once something is out there, it’s out there forever.
Source:
The Facts: Non-Consensual Intimate Image Pilot | Facebook Newsroom