WhatsApp vulnerability explained: by the man who discovered it
There was an outcry when the Guardian published my information regarding a vulnerability within WhatsApp’s implementation of end-to-end encryption, but much of the response misses the point.
Most of the arguments revolve around what is and isn’t a backdoor. You can debate whether we are looking at a vulnerability, something that is there by error, or a backdoor, something that is there deliberately.
At the time I found the flaw, I didn’t think it was deliberate, but since Facebook was informed in April 2016 and it still hasn’t been fixed, now I’m not so sure. But this discussion is a smokescreen for the real problem.
Facebook does not deny that there is a vulnerability that can be used to “wiretap” targeted conversations by, for example, governments with access to WhatsApp’s servers. And despite WhatsApp’s recent public statements, the vulnerability cannot be avoided by verifying fingerprints or checking a checkbox in the WhatsApp settings.
The vulnerability in a nutshell
In a simplified manner, encrypted messaging works using secret and public keys. Every user has both a secret key known only to them, and a public key.
A user’s public key can be used to encrypt messages which can then only be made readable again with the associated secret key. A difficult problem in secure communication is getting your friend’s public keys. Apps such as WhatsApp and Signal make the process of getting those keys easy for you by storing them on their central servers and allowing your app to download the public keys of your contacts automatically.
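For illustration only, textbook RSA with toy numbers shows the asymmetry being described (WhatsApp actually uses the Signal protocol’s elliptic-curve keys, and real keys are vastly larger):

```python
# Toy textbook RSA with tiny primes -- for illustration only.
# Real systems, including the Signal protocol WhatsApp uses, rely on
# elliptic-curve keys and padding; never use numbers this small.
p, q = 61, 53
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # secret exponent (modular inverse, Python 3.8+)

def encrypt(m, public_key):
    exp, mod = public_key
    return pow(m, exp, mod)    # c = m^e mod n

def decrypt(c, secret_key):
    exp, mod = secret_key
    return pow(c, exp, mod)    # m = c^d mod n

message = 42
ciphertext = encrypt(message, (e, n))
assert decrypt(ciphertext, (d, n)) == message
```

Anyone holding only the public pair (e, n) can encrypt, but decryption needs the secret exponent d, which never leaves the owner’s device.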
The problem here is that the WhatsApp server could potentially lie about the public keys. Instead of giving you your friend’s key, it could give you a public key belonging to a third party, such as the government. That’s why, if you don’t trust WhatsApp, you would need to verify what they call the “security” code with your friends. This way you would be able to make sure the WhatsApp server really did give you your friend’s key.
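A toy model makes the substitution attack and the defence concrete; the names and the “safety number” derivation here are mine, not WhatsApp’s actual scheme:

```python
import hashlib

# Toy model of a central key server that can lie. Illustrative only.
honest_keys = {"alice": b"alice-public-key", "bob": b"bob-public-key"}

def server_lookup(user, malicious=False):
    if malicious and user == "bob":
        return b"government-public-key"   # server substitutes a MITM key
    return honest_keys[user]

def safety_number(key_a, key_b):
    # Both sides derive a short code from the two public keys and
    # compare it out of band (in person, over the phone, ...).
    return hashlib.sha256(min(key_a, key_b) + max(key_a, key_b)).hexdigest()[:12]

# Alice fetches "Bob's" key from the server; Bob knows his own key.
alice_view = safety_number(honest_keys["alice"], server_lookup("bob", malicious=True))
bob_view = safety_number(honest_keys["alice"], honest_keys["bob"])
assert alice_view != bob_view   # the mismatch exposes the substitution
```

If the server answers honestly, the two codes match; the moment it substitutes a key, the out-of-band comparison fails.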
Things get more complicated when you look at what happens if your friend changes his public key, for example, because he gets a new phone or reinstalls WhatsApp. Here the WhatsApp server gives you new public keys for your contacts.
You should be notified when you are sent a friend’s new public key, and given the option to verify again that this new key really does belong to your friend and not to some other party. This behaviour is called “blocking”. The problem with WhatsApp is that you are not given this option.
Instead, your WhatsApp will automatically accept this new key and resend all “in transit” messages (those marked with only one tick), encrypted with the new, potentially malicious key. This behaviour is called “non-blocking”.
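The difference between the two policies can be sketched as follows; the code is illustrative, not WhatsApp’s or Signal’s actual implementation:

```python
# Sketch of the two key-change policies. Names are invented.
def on_key_change(in_transit_messages, new_key, policy, user_confirms):
    if policy == "non-blocking":          # WhatsApp: resend first, maybe warn later
        return [(m, new_key) for m in in_transit_messages]
    if policy == "blocking":              # Signal: ask the sender before resending
        if user_confirms(new_key):
            return [(m, new_key) for m in in_transit_messages]
        return []                         # nothing leaves the phone under the new key

pending = ["meet at 5", "bring the documents"]

resent = on_key_change(pending, "possibly-malicious-key", "non-blocking", None)
assert len(resent) == 2                   # sent automatically, no question asked

held = on_key_change(pending, "possibly-malicious-key", "blocking", lambda k: False)
assert held == []                         # declined: messages stay unsent
```

Under the blocking policy, nothing is re-encrypted to a new key until the user has had a chance to say no.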
It does not sound too bad because it only affects “in transit” messages, but it is at the discretion of the WhatsApp server which messages count as “in transit”: the server is what passes the “delivered” notification back to the sender, so it can simply withhold it. Furthermore, WhatsApp voice calls are also affected: if you receive a new key while a call is connecting, your phone will just switch to the new key without alerting you.
There is an optional setting in WhatsApp called “show security notifications”. With this setting enabled, your phone will display a warning when it receives new keys, but only after those “in transit” messages have already been resent or you have hung up the voice call.
The user experience ‘downgrade’ we are talking about
WhatsApp argues that this vulnerability is a “design decision” that increases usability by making sure messages are resent automatically without the user having to click a yes or no button. This is contested, but even if you believe it increases usability, the argument works only for messages, not for voice calls: if the recipient is offline, the call can’t be picked up anyway and you have to call again later.
Signal chooses to handle key changes with blocking and so does not have this vulnerability; WhatsApp chooses non-blocking and therefore has it. So how are they different? How much more difficult is Signal to use?
Imagine you dump your phone into the ocean and only a month later get a new phone and reinstall WhatsApp, changing your security key. During that month some friends might have sent you messages that remained undelivered.
Using WhatsApp, your friends’ phones are instructed to automatically re-encrypt and retransmit any messages that haven’t been delivered. But they don’t know whether they are sending those messages to you or to the government. Then, and only if your friends have specifically asked WhatsApp to do so, will they see a warning, after the messages have been delivered, that there could have been something shady going on. Signal, on the other hand, will tell your friends something like: “There might’ve been something shady going on. Do you want to resend your message?”
But how often do those situations really occur? I’d say not that often. WhatsApp says “millions of messages”, which is actually not such a big number considering users send and receive something in the region of 15tn (that’s trillion) messages per year through its servers. Even if it does occur the messages aren’t lost if you use blocking: your contacts are simply asked to send the message again.
The other big question is whether blocking is really too much to ask of users.
With “show security notifications” enabled, a user is basically telling WhatsApp: “I’m especially concerned about my privacy and I know what I am doing. Please give me the best privacy possible.”
However, even with this setting enabled, WhatsApp will still automatically re-encrypt and retransmit messages, leaving the sender vulnerable and only notifying them of the key change after the fact. If someone is concerned enough to have the setting switched on, surely WhatsApp should switch to blocking?
In a blog post in defence of WhatsApp, one of the creators of the Signal end-to-end encryption protocol used by WhatsApp, Moxie Marlinspike, tries to explain why this choice has been made.
He said: “The choice to make these notifications ‘blocking’ would in some ways make things worse. That would leak information to the server about who has enabled safety number change notifications and who hasn’t, effectively telling the server who it could man-in-the-middle transparently and who it couldn’t; something that WhatsApp considered very carefully.”
This claim is false. “Blocking” clients could instead retransmit a message of the same length that contains only garbage, and this message would simply not be displayed by the receiver’s phone. Encryption guarantees that garbage and real messages are indistinguishable in encrypted form. Hence, this technique would make it impossible to identify on a large scale which users have the additional security enabled.
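A sketch of that countermeasure, assuming the client simply transmits random bytes of the same length in place of a declined retransmission (in a real client the decoy would be genuine ciphertext):

```python
import os

# A "blocking" client that declines to resend still transmits bytes of the
# same length, so the server cannot tell it apart from a real retransmission.
def decoy_ciphertext(length):
    return os.urandom(length)             # stand-in for encrypted garbage

real = os.urandom(48)                     # stand-in for a re-encrypted message
decoy = decoy_ciphertext(48)              # what a blocking client sends instead

# From the server's point of view both are same-length, random-looking bytes.
assert len(real) == len(decoy)
```

The recipient’s phone would decrypt the decoy, recognise it as garbage, and silently discard it; the server learns nothing about which setting the sender has enabled.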
Only one message?
There have been claims that only one single message is exposed before the sender notices that something shady might be going on. For technical reasons, only the case with one message can be demonstrated, but there is reason to believe the attack can be extended to a longer conversation.
The Signal protocol tolerates “lost or out-of-order messages”. It should therefore be possible for the WhatsApp server to block all “message has been received” notifications from the recipient to the sender for a long conversation while still correctly forwarding the actual text messages. The “receipt” notifications, even if encrypted, can be distinguished from the normal text messages because they are the ones sent directly after the recipient receives a message.
The users would then only see one tick for all their messages, but many might not realise something isn’t right because the messages would get through and the conversation would carry on as normal. After days, weeks or maybe even months, the described attack can then be launched in order to get a copy of the whole conversation since that point in time.
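A toy simulation of this extended attack, with invented names standing in for the real server logic:

```python
# The server forwards messages normally but withholds delivery receipts,
# so everything stays "in transit" (one tick for the sender). A later key
# change then triggers a bulk resend of the whole backlog. Illustrative only.
class Server:
    def __init__(self, suppress_receipts=False):
        self.suppress_receipts = suppress_receipts
        self.in_transit = []

    def relay(self, message):
        self.in_transit.append(message)   # marked undelivered for the sender
        if not self.suppress_receipts:
            self.in_transit.remove(message)   # honest server clears it
        return True                       # the text itself always goes through

server = Server(suppress_receipts=True)
for msg in ["hi", "are you free tomorrow?", "ok, 5pm"]:
    server.relay(msg)                     # conversation proceeds normally

# Weeks later the server announces a new (malicious) key: every message it
# kept "in transit" is re-encrypted to that key and resent automatically.
exposed = [(m, "attacker-key") for m in server.in_transit]
assert len(exposed) == 3                  # the whole backlog is exposed
```

An honest server would clear each message from the backlog as soon as the receipt comes back, so nothing would remain to resend.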
What Facebook should do is fix the issue and release the source code of its apps so that the public can verify their integrity. Facebook’s business asset is not the source code of the app; the source code of many apps with many of the same features is already freely available to competitors. Its real business asset is its massive, almost 2 billion-person user base. The source code of its highly scalable server infrastructure is also a true business asset, but that part doesn’t need to be open sourced.
What can users do in the meantime?
I personally use the Signal messenger. It is not perfect, but it is the best I could find. It does not have this particular flaw, and I don’t know of any other flaws in it. Furthermore, it is open source and makes an effort towards reproducible builds. Users should definitely not switch to less secure systems such as SMS, or other apps where it is well known that messages are transmitted in plain text.