A nightmare few weeks for Apple when it comes to iMessage has suddenly become much worse. Confusion reigns about whether major security issues have been fixed—is your iMessage safe or not? And now Apple has just confirmed the most “shocking” and controversial update in the platform’s history. It really might be time to quit.
Remember earlier this year, when WhatsApp followed its privacy label debacle with an even worse PR disaster, forcing a change of terms on its 2 billion users? Well, iMessage has just done something similar. Three weeks on from Pegasus, a zero-click attack on iMessage users, Apple has confirmed that on-device machine learning will soon screen iMessage image attachments to “determine if a photo is sexually explicit.”
Yesterday we were gradually headed towards a future where less and less of our information had to be under the control and review of anyone but ourselves. For the first time since the 1990s we were taking our privacy back. Today we’re on a different path.— Matthew Green (@matthew_d_green) August 5, 2021
Apple’s new update is designed to flag sexual images when being sent to or from minors on iMessage. Depending on the child’s age, the technology will either warn the child that parents will be informed or simply caution the child to take care. Despite the virtuous use case, the encrypted lockbox will have been breached.
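Stripped of the machinery, the policy Apple describes is an age-gated branch on the classifier’s verdict. Here is a minimal sketch in Python; the under-13 threshold for parental notification is an assumption for illustration, since Apple has not published implementation details:

```python
def intervention(is_explicit: bool, child_age: int) -> str:
    """Age-dependent response described in Apple's announcement.

    The age threshold (13 here) is a hypothetical value for
    illustration only; Apple has not documented the exact cutoff.
    """
    if not is_explicit:
        return "deliver"  # non-flagged images pass through untouched
    if child_age < 13:
        return "warn_and_notify_parents"  # younger child: parents informed
    return "warn_only"  # older minor: cautioned, parents not notified
```

The controversial part is not this branch, of course, but the classifier feeding it: a model on the device is inspecting the content of end-to-end encrypted messages.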
“Apple’s compromise on end-to-end encryption may appease government agencies in the U.S. and abroad,” EFF warns, “but it is a shocking about-face for users who have relied on the company’s leadership in privacy and security.”
Apple is also launching an on-device screener for photos that users upload to iCloud, hashing images to check against content flagged by law enforcement. “We want to help protect children from predators who use communication tools to recruit and exploit them,” Apple says, “and limit the spread of Child Sexual Abuse Material.” This is much less controversial: online photo services already screen content for CSAM.
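At its core, hash-based screening of this kind means comparing an image fingerprint against a database of known flagged material. A heavily simplified Python sketch follows; Apple’s actual system reportedly uses a perceptual “NeuralHash” plus cryptographic matching, so the plain SHA-256 and the blocklist contents here are purely illustrative:

```python
import hashlib

# Hypothetical blocklist of known-bad image fingerprints. In the real
# system the database comes from child-safety organisations and the
# hash is perceptual (robust to resizing/recompression); SHA-256 is a
# stand-in used only to show the matching flow.
KNOWN_HASHES = {
    hashlib.sha256(b"example-flagged-image-bytes").hexdigest(),
}

def matches_known_content(image_bytes: bytes) -> bool:
    """Return True if this image's fingerprint is in the blocklist."""
    fingerprint = hashlib.sha256(image_bytes).hexdigest()
    return fingerprint in KNOWN_HASHES
```

Note that an exact hash like SHA-256 would miss any re-encoded copy of an image, which is precisely why deployed systems use perceptual hashes, and why those hashes raise their own false-match concerns.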
“The initial potential concern is that this new technology could drive CSAM further underground,” warns ESET’s Jake Moore, “but at least it is likely to catch those at the early stages of their offending. The secondary concern, however, is that it highlights the power in which Apple holds with the ability to read what is on devices and match any images to those known on a database. This intrusion is growing with intensity and often packaged in a way that is for the greater good.”
We all want to see technology deployed to tackle abuse, and I have suggested that Facebook reverse plans to encrypt Messenger for this reason, but breaking existing end-to-end encryption is simply that. Screening iCloud Photos on your iPhone is one thing, but adding client-side screening of any kind to iMessage on your iPhone is quite another. And the consequences of this first move need to be understood.
And so, yet again, we are left with the sinking feeling that nothing is as it should be. Fears that WhatsApp might share our data with Facebook were bad, albeit predictable. But Apple using technology to snoop on our private, seemingly encrypted iMessages? Apple is now the first to introduce client-side content analysis on a flagship, end-to-end encrypted messenger. No one saw that coming.
Louder, for the people in the back: it’s impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children. https://t.co/vRHRTxH0I8— Eva (@evacide) August 5, 2021
Apple’s timing is dreadful. Pegasus raised two serious concerns—that Apple’s ecosystem, including iMessage, has dangerous vulnerabilities, and that Apple’s opaque communications and “black box” security made for a very unhealthy mix. Now we can add a third—what happens on your iPhone no longer stays on your iPhone.
When WhatsApp was hit with its own Pegasusgate in 2019, a company spokesperson told the media that “engineers had worked around the clock in San Francisco and London to close the vulnerability,” and that all users should upgrade to install the urgent fix “to protect against potential targeted exploits.”
Contrast that with Apple this time: “Attacks like the ones described are highly sophisticated, cost millions of dollars to develop, often have a short shelf life, and target specific individuals. While that means they are not a threat to the overwhelming majority of our users, we continue to work tirelessly to defend all our customers.”
This lack of clarity has shocked many iPhone users. “There were indications [with iOS 14.7.1] that it might have been fixed, but we don’t know,” STC’s Kate O’Flaherty warns in this week’s video. “Even the best security researchers can’t say… The real problem is Apple’s complete lack of transparency. Everyone wants this to be fixed, but we don’t know if it’s fixed. And if it isn’t fixed, why isn’t it fixed? What’s going on?”
So, what is going on?
First, there are the reports that Pegasus exploited iMessage, iCloud Photo Streaming and Apple Music, that the architecture that stitches these in-house apps and services together has security holes. Remember, the attack likely came from an iCloud account. All of which is made much worse by Apple’s “black box,” making it difficult for third-party researchers and software to investigate and then block attacks.
One could speculate that a reason Apple has been reluctant to publicly claim or even hint that it has blocked the Pegasus exploit is that NSO may just pull another one from its shelf, threading another needle through Apple’s OS, causing embarrassment when it comes to light. And that suggests Apple needs a serious architecture rethink.
Second, this exposes an alarming risk with iMessage, that this secure messenger provides a tunnel through to your iPhone, through which bad actors with an iCloud account can push malicious exploits that your device then silently processes.
And this brings us to the risk from “unknown senders.” The ubiquity of SMS means that, broadly speaking, you can text any phone from your own. WhatsApp, iMessage and others replicate that ubiquity, but they also open your phone to serious new risks: automatically downloading images that might be hiding malicious code, processing other attachments, or back-end processing where vulnerabilities might be exploited.
There is an irony here, given that Pegasus was likely an “unknown sender” attack over iMessage. Apple had seemed to be addressing this risk with its “filter unknown senders” option. Apple says that this “turns off iMessage notifications from senders who aren’t in your contacts,” that “you won’t receive notifications for these messages.”
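Taken at face value, the documented behavior amounts to nothing more than routing plus notification suppression. A minimal Python sketch of that logic (the contact list and names are hypothetical, not any platform’s actual API):

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    body: str

# Hypothetical contact list for illustration.
CONTACTS = {"+15551234567"}

def route_message(msg: Message, filter_unknown: bool):
    """Return (list_name, notify) per Apple's documented behaviour:
    with filtering on, non-contacts are filed in a separate list and
    their notifications are suppressed. Nothing else changes: the
    message and its attachments are still received and processed."""
    if msg.sender in CONTACTS or not filter_unknown:
        return ("main", True)
    return ("unknown_senders", False)
```

The crucial point, as the tests below show, is what this does *not* do: the message body and attachments are still delivered and processed either way.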
[Image: Filter Unknown Senders settings. Source: Apple Support]
On the surface this seems like a very good move. Clearly not at the same level as Signal, whose Message Request feature does not allow an unknown sender to message a user until that user has accepted the contact, but a definite step in the right direction.
One might expect that enabling this filtering triggers additional protections in iMessage. This should be “a step forward in terms of mobile security and prevent a wide range of attacks,” Check Point’s Yaniv Balmas told me. It should be “another layer of defense and raise the bar even higher for exploit developers.”
Except it doesn’t work as billed, which seems especially awkward given suggestions in Pegasus media reports that the filter might offer some defense, and the near certainty that an “unknown sender” perpetrated the attacks.
I tested the filter with normal iPhone to iPhone messages from non-contacts, and the push notifications, previews, attachments and metadata all came through. All that changed was that messages were separated into two lists, known and unknown; optics, nothing more. Researcher Tommy Mysk ran the same tests from iCloud to iPhone and found the same. It seems the filter worked as intended originally, but that may have changed with the launch of iOS 14.
And so, one can assume there is way less going on behind the scenes with unknown senders than we might have hoped. “I think if the option provided any line of defense [against Pegasus],” Mysk told me, “Apple would have enabled it by default.”
[Image: Filter Unknown Senders failure. Source: @UKZak]
Apple didn’t respond to me when I flagged this to them, nor did it answer my questions on other back-end protections against unknown sender messages, or whether the Pegasus vulnerability had been patched.
Let’s be very clear here: iMessage, WhatsApp and all other platforms should not allow anyone to message you unless and until you have accepted that contact. At the very least, the message should be carefully sandboxed until it has been checked or you have approved the contact. Convenience versus privacy and security, again.
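What that stricter model looks like in practice, sketched in Python: messages from unapproved senders are held as opaque bytes, with no parsing, previews or attachment downloads until the recipient accepts the contact. All names here are illustrative, not any platform’s actual API:

```python
class MessageRequestInbox:
    """Sketch of Signal-style message requests: content from unapproved
    senders is held unprocessed until the recipient accepts the contact.
    Hypothetical structure for illustration only."""

    def __init__(self):
        self.approved = set()
        self.pending = {}   # sender -> list of raw, unprocessed messages
        self.inbox = []     # processed, rendered messages

    def receive(self, sender: str, raw_message: bytes):
        if sender in self.approved:
            self.inbox.append(self._process(raw_message))
        else:
            # Held as opaque bytes: nothing is parsed, rendered or
            # downloaded, so a malicious payload is never triggered.
            self.pending.setdefault(sender, []).append(raw_message)

    def accept(self, sender: str):
        """User approves the contact; only now is content processed."""
        self.approved.add(sender)
        for raw in self.pending.pop(sender, []):
            self.inbox.append(self._process(raw))

    def _process(self, raw: bytes) -> str:
        # Parsing and attachment handling happen here, post-approval.
        return raw.decode("utf-8")
```

The design choice is simple: the risky work (parsing, rendering, downloading) is deferred until the user has made a trust decision, which is exactly the layer the “filter unknown senders” option turns out not to provide.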
[Image: Signal Message Requests. Source: Signal]
Meanwhile, there are vulnerabilities in iMessage’s architecture, protective features don’t seem to work as expected, and Apple is not being forthcoming on the impact to users and how those users can stay safe—other than rebooting their phones weekly.
And on top of that we now have that on-device iMessage screening for sexual imagery to further confuse and alarm users. iMessage is about to change its spots.
“It’s a real bullet meet foot moment for Apple,” according to my STC colleague Davey Winder. “I’m not quite sure how [Apple] thought this would play out with users. Apart from anything else, the whole ML side of it raises huge red flags.”
“All screening the masses does is damage the good people and drive the bad people further underground,” data privacy expert Emily Overton has warned, highlighting the acute risk that sensitive, private information might be inadvertently exposed.
There are mountains of research showing that filters for "sexually explicit" content overflag LGBTQ+ content. This "feature" is going to out a lot of queer kids to their homophobic parents. https://t.co/zRCrLNdGss— Eva (@evacide) August 5, 2021
Davey is absolutely right on the ML side of things. There are clear anomalies, such as how it will avoid the known risks of wrongly flagging images. And while the iMessage filter might stay on your iPhone for now, it’s a short walk to screening images against watchlists, with human verification and off-device notifications. All of which is a hugely retrograde step for privacy. But there’s actually an even bigger risk lurking here.
Remember the controversies around reports that Apple declined to encrypt iCloud to appease U.S. law enforcement? Or reports on iCloud hosting in China to appease the authorities in that critical Apple market? Can we trust that Apple won’t be badgered into dangerous compromises, into adhering to local laws? And now Apple will have technology that could be adapted to search and flag client-side content for anything.
I feel it’s also worth reminding people of something: Apple operates the only remaining *large-scale* E2EE encrypted messaging service in China, in iMessage.— Matthew Green (@matthew_d_green) August 5, 2021
“Unfortunately,” says Cyjax CISO Ian Thornton-Trump, “because the due rigor of global law enforcement may not be to western standards, I don’t think this should be deployed to user accounts without a warrant at a minimum. And in some countries, where there is a likelihood of the capability being abused, to hunt for pictures of government critics, for instance, access to this feature should be restricted altogether.”
Right now, this is all about child safety, but what about dissenters, protest groups, or alleged criminals? What happens when Apple is challenged by law enforcement in the U.S. or Europe or China to expand what it looks for? It will no longer be able to offer a “technically impossible” defense; that Rubicon will have been crossed.
As EFF explains, “it would now be possible for Apple to add new training data to the classifier sent to users’ devices or send notifications to a wider audience, easily censoring and chilling speech.”
“Apple sells iPhones without FaceTime in Saudi Arabia, because local regulation prohibits encrypted phone calls,” says privacy researcher Dr. Nadim Kobeissi, writing for the “open letter” against Apple’s latest moves. “That’s just one example of many where Apple’s bent to local pressure. What happens when local regulations in Saudi Arabia mandate that messages be scanned not for child sexual abuse, but for homosexuality or for offenses against the monarchy?”
Apple has yet to respond fully to the controversy it has stirred up, and the company did not respond to my questions ahead of publishing. The best we have so far is the reported internal memo, that lauds the move despite the criticism. “We know some people have misunderstandings,” wrote Apple VP Sebastien Marineau-Mes, “and more than a few are worried about the implications, but we will continue to explain and detail the features so people understand what we’ve built.”
End-to-end encrypted messaging is controversial; it has become a central battlefield between hawkish lawmakers and security agencies on one side, and big tech and the privacy lobby on the other. Once a backdoor is introduced, the fear is that it will soon spiral out of control. And Apple appears to have just blinked first. The repercussions will not be known for some time, but this is definitely a pivotal moment.
We will cover this issue in much more detail on STC next week, looking into the iMessage and Photos updates and the implications for Apple’s billion-plus iPhone users. If you have any particular questions or concerns, please let us know.
Meanwhile, iMessage users are left worried that serious vulnerabilities may or may not have been fixed, that Apple is about to break its own model and screen the private iMessages on their iPhones, and that this risks introducing even more vulnerabilities into Apple’s “black box” ecosystem. Maybe it’s time…
[Image: iMessage on/off toggle. Apple, iOS 14]