Safeum

I’m not going to do a full protocol analysis of proprietary products with bad documentation. Here are some immediate thoughts based on the content of their web page.

“We implemented most recent achievements and cutting-edge technologies in information technology security to develop cryptographic protection mechanisms for our instant messenger.”

Cutting edge is hardly the word when messages are signed with ECDSA: signatures lack deniability.

“The level of encryption and its performance meets the most stringent banking standards.”

Marketing (Stef’s rule #6: uses marketing terminology like “cyber” and “military-grade”).

“Your private or business information is totally safe and confidential when using our chat messenger”

Misleading marketing. It implies the lack of a threat model.

“In case the user does not trust the cryptographic service provider”

I think this is supposed to say IM server, not cryptographic service provider; the implicit assumption is that the user can trust the cryptographic services provided by Safeum.

“P2P mode which eliminates the technical possibility of interception”

It absolutely doesn’t. It only bypasses Safeum’s servers so that they cannot intercept messages.

“Digital signature is another reliable way of data protection during transfer.”

Digital signatures are the old way: they take away the important property of deniability. This should be fixed by switching to MACs.

“Account and chat hacking  as well as spoofing are totally eliminated.”

These things are unrelated, and claiming the user cannot be hacked is snake oil.

“Hybrid encryption scheme”

Marketing. Key exchange algorithms for symmetric ciphers are as old as the Internet.

“This hybrid scheme allows to “take the best” out of each system”

“The round wheels make the car go much faster compared to using square shape tires!”

“SafeUM cryptography experts implemented complex algorithmic optimization”

Sounds like the crypto library used is no longer the standard implementation.

“Our servers have no hard drives.”

Quite frankly, users don’t care about the server configuration. They want:

  1. Functional end-to-end encryption to protect messages from Safeum and all other third parties.
  2. A way to register and use the service anonymously through Tor to hide metadata.

“Third parties can only access these data after going through a complicated legal and bureaucratic procedure.”

Or they can remotely exploit the server.

“We do not have private keys. They are generated on the basis of the pass-phrase that the user must remember.”

The user is a bad entropy source; the CSPRNG provided by the OS should be used instead. The user’s passphrase should only protect message logs at rest.
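
A minimal sketch of the sane division of labor (function names are illustrative, not Safeum’s API): message keys come from the OS CSPRNG, and the passphrase is only stretched with a KDF to protect logs at rest.

    import os
    import hashlib

    def generate_message_key():
        # Message keys must come from the OS CSPRNG, not from anything
        # the user typed: os.urandom() reads the kernel's entropy pool.
        return os.urandom(32)

    def derive_log_storage_key(passphrase, salt):
        # The passphrase is only suitable for protecting data at rest,
        # and even then it must be stretched with a memory-hard KDF.
        return hashlib.scrypt(passphrase.encode(), salt=salt,
                              n=2**14, r=8, p=1, dklen=32)

    salt = os.urandom(16)                  # stored alongside the logs
    message_key = generate_message_key()
    log_key = derive_log_storage_key("correct horse battery staple", salt)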

“You can simultaneously use up to three accounts online”

Is the same key generated for every device when the user enters the same password?

“What do you think about the disable chat history saving feature?”

This is a normal setting that users are going to want.

“Screenshot protection?”

Yet another snake oil feature: there is no sender-based control outside idiocracy.

“Sign up without mobile number using only login and password. This will keep your location secret.”

There’s no Tor/VPN/proxy support, and paying customers’ account names can be correlated with their billing information. This is an outrageous lie.

“Security is at the core of everything, even in the free version!”

Yet there is a paid option for “enhanced encryption”. Boo.

“We do not ask you to take our word! You can check the reliability of SafeUM secure messenger if you wish.”

Great. Where’s the source code? Does your licence allow me to compile a client from the source, or do I need to download the binary and take your word for it?

Technologies

“Direct dynamic AES* key generation scheme”

If you’re generating a new key per message, say so.

“It will take several decades and all the computing power of the globe to decrypt each of your messages.”

Crypto is not broken, it is bypassed or undermined. And the figures are way below the real numbers: it would take millions of years, not decades, to decrypt a single message.
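
A back-of-the-envelope check, with a deliberately generous assumption of 10^18 key trials per second for the entire globe, shows even “millions of years” is a massive understatement for a 256-bit key:

    # Generous assumption: the whole planet manages 10**18 trials/second.
    trials_per_second = 10**18
    seconds_per_year = 60 * 60 * 24 * 365
    years = 2**256 // (trials_per_second * seconds_per_year)
    print(format(years, ".2e"))   # ~3.7e+51 years, not "several decades"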

“Besides reliable encryption SafeUM also guarantees sender authenticity and data integrity by implementing digital signature mechanism.”

Nothing is said about fingerprint verification.

“AES block cipher with CBC mode used”

There are faster alternatives that are harder to implement incorrectly.

“PRNG generates a pseudo-random sequence of numbers”

The correct term is CSPRNG, and it’s not generally advertised as a technology.

“TLS v2.0 (SSL v3.0) is used as an encryption transport layer for WebSockets”

There is no TLS 2.0. And if it’s actually SSLv3, they should immediately get rid of it: http://disablessl3.com/

“For data transfer and storage, the AES encryption is applied (256 bits key in CTR mode of operation).”

But they just said it was CBC mode. Okay, CTR it is.

Conclusion

Would you pay for a service that offers fewer features, less security, less privacy and less anonymity than a gratis alternative such as Signal? Either the marketing hasn’t consulted the tech department, or the tech department has no real expertise in crypto. Avoid.

Current state of TLS

Qualys’ SSL test is an extremely useful tool for evaluating the security of a specific web domain, but it doesn’t provide statistics. The services that do don’t provide a database from which to parse information on TLS configurations. GCHQ has a program called FLYING PIG that follows TLS trends. We need an open-source version of that.

I found a great tool for scanning domains: CipherScan.
Huge applause to Julien Vehent et al. for their effort.

I wrote a quick script (preProcess.py) that converts Alexa’s top 1M domains CSV file into a simple list while preserving order. It also removes any duplicates and URL paths that have made it into the original list. I then wrote another script (multiCS.py) that keeps N instances of CipherScan running, each analysing a different domain at a time. These aren’t my finest Python, but they get the job done.
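
For the curious, the pre-processing boils down to something like this (a simplified sketch of what preProcess.py does, assuming Alexa’s usual “rank,domain” CSV format):

    # Convert Alexa's "rank,domain" CSV into an order-preserving list of
    # unique domains, dropping any URL paths that leaked into the list.
    seen = set()
    with open("top-1m.csv") as src, open("domains.txt", "w") as dst:
        for line in src:
            domain = line.strip().split(",", 1)[1]   # drop the rank column
            domain = domain.split("/", 1)[0]         # drop accidental paths
            if domain and domain not in seen:
                seen.add(domain)
                dst.write(domain + "\n")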

Here’s the compressed file of everything you need to start analysing web-domains: files.tar.gz (SHA256 67dee99416c055f80abfccdecb11ad1acdb22ed05fb570f3fea85e2672113061)

Here are the top 1M domains analysed (I hereby place the dump into the public domain). dump.tar.gz (SHA256 e4889418993a3d34d7e362d4cec024437aa62d8c47347f7c7ca45b06419a5f0f)

Now for the analysis; good ones first. Here are the domains that support the best cipher suites:
1. ECDHE_RSA_CHACHA20_POLY1305 (20 685)
2. ECDHE_ECDSA_AES256_GCM_SHA384 (32 375)
3. ECDHE_RSA_AES256_GCM_SHA384 (327 758)
4. ECDHE_ECDSA_AES128_GCM_SHA256 (32 380)
5. ECDHE_RSA_AES128_GCM_SHA256 (328 439)

Being on this list isn’t enough. Just because cutting-edge crypto is supported doesn’t mean the client and server actually use it. Not all clients support the latest cipher suites, so servers retain backwards compatibility to ensure users can connect to the web page. This causes problems, as a MITM can perform a TLS downgrade attack:

Alice to MITM:  I know AES, RC4 (~broken), RC2 (broken).
MITM to server: I know RC4 and RC2.
Server to MITM: The strongest common cipher is RC4, use that.
MITM to Alice:  The strongest common cipher is RC4, use that.

The server makes the final choice based on the client’s proposals, and the client will either accept it or drop the handshake.

Users can configure their browser to reject insecure ciphers, but not everyone has the skills to do that. The only way to pressure developers to update their code and users to update their clients is to stop supporting old crypto. Just as we need to know which companies send passwords to users over unencrypted email, we need to know which companies leave all their customers vulnerable so that a minority of users with their “Netscape browsers” won’t get upset when they need to update. Based on Alexa’s top ~1M domains, I created a set of lists of those that support a weak configuration (a sketch of the tallying step follows the lists):

No encryption:
Not available (420 042)
Not working (164 388)

Weak key exchange:
RSA-512 (34 168)
RSA-1024 (4 587)
DHE-512 (26 854)
DHE-1024 (233 043)

Weak symmetric ciphers:
RC2 (34 866)
RC4 (292 561)
DES (61 692)
3DES (499 522)

Weak MAC:
SHA1 (567 302)
MD5 (212 612)

Weak protocol:
SSL2 (35 243)
SSL3 (678 079)
TLS1 (565 269)
TLS1.1 (976 863)

Weak Certificate:
RSA-1024 signature (46)
DSS signature (14)
SHA1 fingerprint (175 942)
MD5 fingerprint (4 689)
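
The tallying itself is nothing fancy; in spirit it’s a loop like this (a sketch only: the exact field names depend on the CipherScan version, so check what your dump actually contains):

    import glob
    import json
    from collections import Counter

    weak = Counter()
    for path in glob.glob("results/*.json"):
        with open(path) as f:
            scan = json.load(f)
        # Count each domain once per category it is weak in.
        suites = scan.get("ciphersuite", [])
        if any("RC4" in s.get("cipher", "") for s in suites):
            weak["RC4"] += 1
        if any("SSLv3" in p for s in suites for p in s.get("protocols", [])):
            weak["SSL3"] += 1
        # ...and so on for RC2, DES, weak DHE parameters, etc.
    print(weak)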

Wickr

Secure: Send and receive secure messages, documents, pictures, videos and audio files.

Anonymous: Your conversations can not be tracked, intercepted or monitored. Your Wickr ID is anonymous to us and anyone outside your Wickr network.

No Metadata: Wickr removes all records, geotags, and identifying information from your messages and metadata

Shredder: Irreversibly remove all deleted messages, images and video content from your device.

Configurable timer: Set the expiration time on all your messaging content.

Sounds promising, yet we can’t confirm any of it. Wickr is proprietary software. Why? There are successful products, such as TextSecure, that are free and open source. They are doing great, so there is no economic incentive not to make Wickr GPL-licenced, let alone open source. Having to trust the company is the problem, and Wickr should be disregarded at this point by anyone who values their privacy. Audits of source code by independent companies are excellent; here they do not matter. It’s like RSA saying “don’t worry, BSAFE was audited by the NSA.”

After the source code is released, and if the licence allows users to compile their own clients from it (preferably Wickr should come with a script that produces a reproducible build), we can reliably analyze their claims. What worries me is that some of them are either false or not up to date:

Encryption

Wickr uses an ECDH-521 key exchange plus AES-256 for symmetric encryption. Despite forward secrecy, there is no ratcheting or self-healing property: a long-term MITM can be established at any point with a single key-exfiltration attack against either endpoint.

Fingerprint verification

Fingerprint verification is hidden behind a tap on the user avatar. Anyone who doesn’t know better won’t be using the feature. Since the lock icon is the same color as all the other symbols, there’s no way to immediately see that the security is not at an adequate level.

Fingerprint verification can be done through the MITM using video. This is actually a decent method if the recipient is known (and assuming HSA morphing technology hasn’t reached this point yet). The issue is usability. After receiving the video, it must be viewed by keeping the camera icon pressed. If the user accidentally presses the accept button right below the camera icon, the client assumes key verification was valid and assigns a green key icon to the contact: “verified”.

The sender will then have to either record a new video, or resort to less private options:

The fingerprint can also be sent via inherently insecure SMS or unencrypted email. Even the suggestion of using these channels reeks of unprofessionalism from the Wickr team. There is no way for users to display fingerprints on the screens of their devices, so there is no high-assurance way to verify fingerprints on the spot.

The explanation of the importance of fingerprints is bad. Fingerprints are not “optional”; they are the only thing that prevents MITM attacks against the user. In a sense they’re not lying when they say verification provides an added level of security, they just fail to mention there is zero security without it.

“That friends are who they say they are”.

Providing this level of misinformation is scary. It will lead to confusion where people do alternative challenge-responses through the MITM:

“What movie did we watch yesterday?”.

“-Titanic”

“-Okay it must be you.”

This section should have carefully explained that verification ensures end-to-end encryption is done between Alice’s and Bob’s devices, and not between Alice and the HSA, and the HSA and Bob.

Illusion of sender-based control:

This is pure security through idiocracy. The next picture, taken with an external camera, explains:

[Photo: the screen captured with an external camera]

I found many reasons to use TextSecure over Wickr. I found zero reasons to use Wickr over TextSecure. Vote with your feet.

PS. Wickr, check your hiring priorities:

[Screenshot: Wickr’s open job listings]

TFC-CEV

Bulk CNE is a problem. In a previous article I discussed how to achieve endpoint security with a three-computer setup and data diodes. Tinfoil Chat, or TFC, is my FOSS suite of programs, written in Python (auditability and the inability to distribute insecure binaries were priorities), that makes real-time chat possible with the described setup. Here’s a visualization of TFC-CEV:

[Figure: TFC overview]

TFC-CEV addresses many problems in current end-to-end encryption. First of all, keys are generated by mixing /dev/(u)random with entropy obtained from an HWRNG with a free circuit design. TFC-CEV uses four additive ciphers to create provably secure cascading encryption:

TFC-CEV encrypts messages with Keccak-CTR (512-bit key), XSalsa20 (256-bit key), Twofish-CTR (256-bit key) and AES-GCM (256-bit key). Authentication is done with three algorithms: GMAC in AES GCM (256-bit key), HMAC-SHA512 (512-bit key) and SHA3-512 MAC (1144-bit key).

XSalsa20, Twofish and AES use the first 256 bits of their 512-bit keys. HMAC-SHA512 and Keccak-CTR both use the entire 512-bit key. SHA3-MAC uses the first 1144 bits of three individual 512-bit keys. All eight 512-bit keys are individually hashed with the SHA3-512 hash function after each message to provide forward secrecy.
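
The per-message forward secrecy mechanism is simple enough to sketch in a few lines (illustrative code using Python 3.6+ hashlib, not TFC’s actual implementation):

    import hashlib

    def ratchet_keys(keys):
        # After each message, every 512-bit key is replaced with its
        # SHA3-512 digest. The old keys are overwritten, so compromising
        # the device later does not reveal keys of earlier messages.
        return [hashlib.sha3_512(k).digest() for k in keys]

    keys = [bytes(64) for _ in range(8)]   # demo values; 8 pre-shared keys
    keys = ratchet_keys(keys)              # run once per processed message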

TFC has three main programs (Tx.py, Rx.py and NH.py) that move data unidirectionally between the two data diode channels. NH.py also talks to the Pidgin IM client. (The main reason Pidgin was chosen is that it’s included with Tails.) Past vulnerabilities of libpurple are not a problem, as TFC assumes the NH is in complete control of the HSA. In addition to the main programs, key generation is done with a set of additional programs.

TFC is also the first(..?) program to provide a trickle connection, where TxM outputs a constant stream of messages to the recipient. This prevents an adversary who inserts malware into NH or RxM from finding out when, and how much, communication is actually taking place. The receiving device does not display noise messages, so the trickle has no noticeable effect on the conversation. Noise messages are only sent if the user has not added messages to the queue. It’s actually a bit more complicated than that, since TFC also transmits commands directly from TxM to RxM, encrypted with their own dedicated set of keys: every command with a volatile effect, or one that contains private information, is encrypted with the same technique as messages. Before outputting a command packet, the trickle connection independently flips a coin to decide whether to output it or a noise message. The same goes for messages: before sending a message (or a part of it), the coin is flipped to decide whether a noise command packet is sent instead.
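
Stripped of the details, the trickle logic is a loop along these lines (a simplified sketch, not TFC’s actual code; in the real system every packet is of course encrypted, so noise is indistinguishable from traffic):

    import secrets
    import time
    from queue import Queue, Empty

    def trickle_loop(msg_queue, send, interval=2.0):
        # Emit exactly one packet per interval, real or noise, so an
        # observer of the stream cannot tell when or how much actual
        # communication takes place.
        while True:
            if secrets.randbits(1):                    # coin flip: command slot
                send(b"C" + secrets.token_bytes(255))  # command or noise command
            else:
                try:
                    payload = msg_queue.get_nowait()    # real message part
                except Empty:
                    payload = secrets.token_bytes(255)  # noise message
                send(b"M" + payload)
            time.sleep(interval)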

You can read about TFC in more detail via the links below:

Whitepaper

Manual

Project pages on GitHub for OTP version and Cascading Encryption Version (CEV).

Reception

The feedback on the project has been very positive, excluding the feedback on my cocky title “zero-day immune” from /r/netsec, but I’m glad someone got the idea.

The project has received a lot of constructive criticism too. It’s always a learning process to get things right, but in the end it has been worth it. To quote security researcher Nick P:

Far as Tinfoil Chat, I’ve recommended it heartily as a project to use and improve. Markus Ottela took what he learned from prior work and our comments at Schneier’s blog (esp on data diodes & physical separation) to create a unique, solid design. He’s been posting on the blog for feedback for months, we’ve suggested many risk mitigations (eg polyciphers, continuous transmission), and he’s integrated about every one into his system. Most just ignore such things or make excuses: Markus is 1 in a 1,000 in (a) applying what’s proven and (b) not letting problems become legacy “features.”

Source

End point security

The article on bulk CNE showed how exploitation of client devices shreds the security of current top-of-the-line end-to-end encryption tools. While it’s not the job of a secure messaging client to patch vulnerabilities in the host OS, it is required that the client remains functional in a secure configuration. What would that look like? In the case of current E2EE software, the Trusted Computing Base (TCB) is located on the networked device. This makes it easy for state-sponsored malware to exfiltrate keys and plaintexts.


It might seem like an impossible task to solve: software always has vulnerabilities, and because the modern client is constantly connected to the Internet, the window of exposure for key exfiltration remains open. TextSecure does help mitigate the threat of key exfiltration with its DH ratcheting. The second a MITM attack ends, the attacker needs to re-exfiltrate all keys for the next MITM attack to succeed.

But we can do better than that. Google presented Project Vault in June. It’s a smart card in the shape of a microSD card that is able to store private keys and encrypt data inside its secure cryptoprocessor. This is a great improvement, in the sense that it guarantees keys remain secure and encryption is done properly despite endpoint compromise (assuming the smart card has no backdoors). However, in the case of instant messaging (IM), it’s not enough.

As you can see, the sensitive plaintext messages are passed to Project Vault through the insecure host OS. Additionally, all replies from contacts that the TCB decrypts are displayed via the host OS. While smart cards have many use cases, this does not seem like a viable one. We need an environment where the keyboard and display connect directly to the TCB. So what should we do? Let’s quote two cryptographers:

Each of the [reviewed] apps seem quite good, cryptographically speaking. But that’s not the problem. The real issue is that they each run on a vulnerable, networked platform. If I really had to trust my life to a piece of software, I would probably use something much less flashy — GnuPG, maybe, running on an isolated computer locked in a basement.

Matthew Green

Assume that while your computer can be compromised, it would take work and risk on the part of the NSA — so it probably isn’t. If you have something really important, use an air gap. Since I started working with the Snowden documents, I bought a new computer that has never been connected to the Internet. If I want to transfer a file, I encrypt the file on the secure computer and walk it over to my Internet computer, using a USB stick. To decrypt something, I reverse the process. This might not be bulletproof, but it’s pretty good.

Bruce Schneier

This approach would use two computers instead of one. However, this system only works if the HSA is unable to produce malware that can spread on USB drives, and that capability isn’t very hard for the HSA to build compared to exploit research. Let’s go through the attack step-by-step.

  1. We encrypt a message on the airgapped computer that functions as the TCB. We move the ciphertext from the airgapped PC to the networked PC using a never-before-used USB drive.
  2. We send the ciphertext to our contact from the networked PC. We then copy the reply from our contact to the USB drive and move it back to the airgapped PC for decryption. Since the networked computer is infected, the infection spreads to the airgapped PC via that USB drive.
  3. When we write our second message, all encryption keys are transmitted by the malware inside the USB drive to the infected networked computer, which will then exfiltrate those keys back to the adversary. Game over.
  4. Now, while this configuration is not secure, it does show us an important thing. Before we transferred the reply to the airgapped PC, our TCB was secure. We can send as many messages as we want using a new USB drive every time, throwing the used ones in the shredder. (Don’t worry about the costs, we won’t be using USB drives after this article.) If we stop sending messages, we can also receive as many messages as we want on the airgapped device, using a new USB drive every time. The keys are still in our possession. The compromise of keys/plaintexts happens only after we send a message after decrypting one or more messages. See where I’m going with this? If we split the two secure processes onto two airgapped computers (TCBs with the dedicated purpose of either encrypting or decrypting messages), we can repeat the first two steps in isolation, forever.

This does in fact work. The lower (grey) TCB stays clean when it only outputs data; a clean system does not output keys on its own. The upper (red) TCB is compromised, but keys and decrypted plaintexts stay inside the endpoint, because the device only receives data.

Now, let’s remove the $4 cost per message. Douglas W. Jones and Tom C. Bowersox wrote a fantastic paper on RS232 data diodes. A data diode is a device that uses the laws of physics to enforce the direction of data flow in asynchronous data channels. This approach is so secure that commercial models have received EAL7+ (the best possible) Common Criteria certification. The cost of these devices, however, is nowhere near suitable for end users. Here’s how we can construct a sub-$10 data diode for RS232 (serial port):

The transmitting side has two LEDs connected in parallel with opposite polarities. The receiving side has two phototransistors that regenerate the signal by outputting power from 6V batteries (with opposite polarities in relation to Rx and GND) when the corresponding phototransistor starts receiving light. This optical gap is guaranteed to be one-way because, while LEDs do generate a very small current when light is cast upon them, phototransistors do not emit light.

When the data diode is used to replace the USB drives in the three-computer setup, here’s how the final assembly looks:

[Figure: the final three-computer assembly]

Messages are written on the TxM (transmission module) and received on the RxM (receiver module) devices of both users. The NH (network handler) acts as a converter between the serial ports and the network. The data diodes provide high-assurance protection against exfiltration of sensitive data. TxM and RxM don’t have to be commercial HSM devices; a netbook should do fine for most situations, provided that the data diode is its only connection to the outside world: WLAN and Bluetooth interfaces must be removed, together with speakers, webcam and microphones. Batteries should be charged only when the device is powered off. This approach puts a one-time price tag on endpoint security.
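
On the software side, the unidirectional channels are just ordinary serial ports: one side only ever writes and the other only ever reads. A minimal sketch with pyserial (device paths are illustrative):

    import serial  # pyserial

    # TxM side: the transmitter's only I/O is writing to the port
    # that drives the data diode's LEDs.
    tx = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)
    tx.write(b"ciphertext packet\n")

    # NH/RxM side (a different machine): the only I/O is reading from
    # the port fed by the diode's phototransistors.
    rx = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)
    packet = rx.readline()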

Now I should immediately discuss the three vulnerabilities in this approach.

Firstly, if the TxM is compromised during setup, the malware can exfiltrate keys. However, this kind of compromise can be confirmed to some extent. Since the TxM never knows what’s on the reception side of the data diode, the receiving end can be plugged into a spectrum analyzer. These devices can see hidden signals, because no information is missed when the displayed output is the result of FFT calculations. Even if this is not done, compared to the continuous window of exposure of other E2EE systems, a ~10 minute window during TxM setup is a ground-breaking improvement.

Secondly, while the RxM has no window of exposure for exfiltrating data, the window of exposure for exploiting the RxM remains open. Thus, malware on the RxM can show arbitrary messages at any time. However, because there is no way for the attacker to find out what messages the users actually send each other, any displayed message is highly likely to be out of context (unless the malware has a sophisticated AI algorithm). Additionally, users can compare the log files of their RxMs to detect whether they include messages the other participant never typed.

Thirdly, the endpoint is only as secure as the physical environment around it. Covert microphones, keyloggers and video cameras bypass the encryption completely. Physical compromise of the TxM also compromises security. However, these can be claimed to be actual targeted attacks: you can’t copy-paste human beings to spy on every user individually the way you can copy-paste state-sponsored malware that has more or less ubiquitous access.

There is however one passive, remote attack that has been public knowledge since 1985:

The average consumer is unable to provide high-assurance EMSEC for their endpoints against TEMPEST monitoring. Briefly explained, all keyboard and display cables act as weak antennas when data is passed through them. By collecting these signals with sophisticated equipment, the attacker is able to log keystrokes and view screen content. An active version of this attack, done by illuminating retro-reflector implants, grows the range from “across the street” to more than 10 km. As far as I know, TEMPEST still requires a tasked team, and even if it could be done with something like SIGINT drones, there would be no way to avoid linearly increasing cost when scaling up the surveillance. Currently, such an attack would be too expensive. The day it isn’t, you’ll know:

[Image: mock-up of surveillance drones over London]

Maybe.

Physical attacks are the proper balance between privacy and security. As long as the privacy community keeps arguing that “unscalable” exploits are a functional alternative to backdoored proprietary software and services, we will submit ourselves to a false dilemma on the LEAs’ terms: “Let’s stop doing mass surveillance that hurts company reputation and switch to mass surveillance where companies can have plausible deniability.” Unless we start communicating with high-assurance setups that are secure against mass-scale endpoint exploitation, neither outcome of the debate provides a real solution to stop mass surveillance.

That being said, let’s discuss one last issue.

Key distribution

We can’t trust a possibly compromised RxM with generating private DH values / shared secrets from received public DH values (the generated value might have been sent in by the HSA). If we want to do a DH key exchange with three computers, we must

  1. Generate the private DH value on the TxM and move it with a one-time USB drive, or through an additional data diode, directly to the RxM.
  2. Type the very long public DH value from the RxM into the TxM by hand (we can’t have automated input to the TxM at any point after setup).
  3. Authenticate the integrity of the received public values, preferably with a face-to-face meeting (as discussed in the article on Axolotl).
  4. Finally, once the TxM has generated the shared secret, again move it directly to the RxM, either with a one-time USB drive or through an additional data diode. After this, a KDF can generate the two symmetric keys from the shared secret.

This is very inconvenient, and worth it only if the participants live across the globe and a physical meeting would consume even more of their resources.

What can be done instead is to generate the symmetric encryption keys on the TxM and move them directly to the two RxM devices, either with a USB drive or with a data diode. The latter is more secure, as it spares you the burdensome sanitation of USB drives, but it requires a lot more hassle during the key-exchange rendezvous.

Let’s discuss the misconceptions about pre-shared keys:

“Physical key exchange is too inconvenient”

Physical key exchange is inconvenient, but it’s the highest-assurance method of providing integrity there is. Even if you were using Axolotl (TextSecure/Signal), you would have less assurance when verifying fingerprints over the phone. You should always compare fingerprints face to face. For this, TextSecure provides a convenient QR-code fingerprint verification feature. In my article on Axolotl, I made a proposal that would speed up the current QR-code fingerprint verification in TextSecure three-fold. Compared to my proposal, the exchange of USB drives is four times faster (copying the keyfile and hammering the USB drive takes some time too, of course).

We can have forward secrecy by passing the encryption key through a PRF after each message. The ciphertext just needs to include the number of iterations the initial key has been run through the PRF. We don’t even have to worry about keys getting out of sync if some packets are not received. The only problem is that if packets arrive out of order, any packet arriving after a more recent one becomes undecryptable (unless old keys are not immediately overwritten).
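
A sketch of this counter scheme, using HMAC-SHA256 as the PRF (in practice each side stores only its latest key and iteration count and advances relatively; keeping the initial key around would of course destroy forward secrecy):

    import hashlib
    import hmac

    def advance(key, steps):
        # Run the key through the PRF the requested number of times.
        for _ in range(steps):
            key = hmac.new(key, b"ratchet", hashlib.sha256).digest()
        return key

    initial_key = bytes(32)                 # demo value; pre-shared physically
    sender_key = advance(initial_key, 7)    # sender is at iteration 7
    receiver_key = advance(initial_key, 7)  # receiver fast-forwards to the
    assert sender_key == receiver_key       # count stated in the ciphertext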

Actually, very few modern cryptographic properties are lost with the three-computer setup.

Since messages are authenticated with symmetric MACs, we have deniability (the recipient also has the key that authenticates messages, so either party could have forged them).

The lack of DH ratcheting does take away the self-healing property that Axolotl has. But since there is no remote zero-day exploit able to exfiltrate keys, it’s unlikely this feature will be needed. Self-healing might not even do the trick: in the case of TextSecure, a compromised TCB might generate insecure keys or covertly exfiltrate keys and/or plaintext messages.

“But who’s going to write the program that supports this type of hardware layout?”

I already did.

Bulk CNE

End-to-end encryption, as it currently stands, is hailed as the solution to ending mass surveillance. This approach is certainly better than previous generations of protocols, but it still falls short on two aspects:

1. Random number generators may be weak


ECRYPT II recommendations state that 96-bit keys are secure only until 2020; 256-bit keys, on the other hand, are secure for the “foreseeable future”.

If the encryption keys come from a low-entropy source, they might be predictable. This would mean the adversary is able to try all the possible keys and decrypt the communication without attacking the algorithm or the devices. The NSA was revealed to have undermined Dual EC DRBG. This might indicate the NSA has also undermined hardware random number generators; such an attack has already been proven possible against the ones used in Intel processors.
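
To see why the entropy source matters more than the key length, here’s a toy demonstration: a “256-bit” key derived from a 20-bit seed falls to plain enumeration in seconds.

    import hashlib

    def bad_key_from_seed(seed):
        # A "256-bit" key whose only entropy is a 20-bit seed.
        return hashlib.sha256(seed.to_bytes(4, "big")).digest()

    victim_key = bad_key_from_seed(777_777)    # unknown to the attacker

    # The attacker ignores the cipher entirely and enumerates the seeds:
    for seed in range(2**20):
        if bad_key_from_seed(seed) == victim_key:
            print("key recovered, seed =", seed)
            break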

2. End point security still sucks.

For the longest time, the infosec community thought of client hacking as a targeted attack. The Snowden leaks, however, have shed new light on bulk computer network exploitation (CNE).

The Intercept wrote an article about it. Wired wrote another. It has also been discussed in various conferences and speeches by security experts and privacy advocates. Below is a project where I collected statements regarding (bulk) CNE:

What implications does this have for end-to-end encrypted tools? Let’s take the current top-of-the-line protocol, Axolotl, and its implementation, TextSecure. By exploiting the endpoint and stealing the private keys before the initial key exchange, the HSA is able to perform an undetectable MITM attack against the users:

[Diagram: TextSecure MITM with exfiltrated keys]

The window of opportunity for this attack is very small for TextSecure. However, persistent malware can exfiltrate messages directly from the device. This can be done at any time.

[Diagram: TextSecure keylogger]

A smartphone is simply not a secure trusted computing base to perform encryption on. The next article will discuss how to secure the endpoints against exfiltration of keys and plaintexts.

WhatsApp vs TextSecure – a closer look at Axolotl

It was recently announced that WhatsApp is switching to end-to-end encryption. And not just any protocol, but Axolotl by Open Whisper Systems.

Axolotl and the implementations

Here’s how Axolotl is implemented in WhatsApp and TextSecure:

[Diagrams: Axolotl as implemented in WhatsApp and in TextSecure]

NOTE: The colors are used to simplify the ECDHE down to this color-mixture example, as the math would take too much room.

Axolotl is an outstanding protocol that uses ratcheted Diffie-Hellman key exchanges. Instead of a long-term signing key, Axolotl initiates the key exchange with three ECDH operations, one of which is done with the long-term private DH value (the identity key).

This way, Axolotl provides forward secrecy while removing the need to advertise public DH values. Messages are encrypted with message keys derived from chain keys, which in turn derive from a constantly changing root key. Unlike OTR, the same symmetric key is not used until the next DH handshake completes. Instead, the chain key is run through HMAC-SHA256 to ensure forward secrecy for each message separately. The recipient is able to regenerate all the message keys through cyclic hashing of the current chain key, to decrypt any messages sent while their client was not replying.
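
The chain-key mechanics look roughly like this (a sketch; the exact constants and inputs are in the TextSecure specification):

    import hashlib
    import hmac

    def message_key(chain_key):
        # Message keys branch off the chain and are discarded after use.
        return hmac.new(chain_key, b"\x01", hashlib.sha256).digest()

    def next_chain_key(chain_key):
        # Each step is one-way, giving forward secrecy per message.
        return hmac.new(chain_key, b"\x02", hashlib.sha256).digest()

    ck = bytes(32)            # demo value; really derived from the root key
    for _ in range(3):        # the recipient replays this loop to recover
        mk = message_key(ck)  # keys for messages sent while it was offline
        ck = next_chain_key(ck)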

The root key is renewed with cyclic hashing through HMAC-SHA256 and HKDF, together with entropy obtained from the constant generation of DH shared secrets, so users are able to trace the trust of the current root key back to the initial key exchange.

The two implementations look almost completely identical. Can you spot the difference?

WhatsApp does it wrong

There is no fingerprint generation in WhatsApp. What this means is that WhatsApp provides almost as bad security as iMessage. Assuming WhatsApp did not make any other changes to Axolotl in their proprietary source code, an undetectable man-in-the-middle attack can be mounted either from the backbone of the Internet by an HSA, or from within the WhatsApp server (compromised either with malware or an NSL). Whether Axolotl uses additional TLS to protect the DH values between client and server does not matter; PKI provides no meaningful protection against HSAs.

In iMessage, another public key can be added, or the current public key can be switched, at any time without the user noticing. In the case of WhatsApp, the MITM can only be performed during the initial handshake (assuming the proprietary code does indeed pass the previous root key to the cyclic hashing process).

Here’s how the MITM against WhatsApp works (only the key exchange part is shown):

[Diagram: MITM against WhatsApp’s Axolotl handshake]

The missing fingerprint feature makes WhatsApp completely insecure. End-to-end encryption is usually defined as “the user is in control of the encryption keys”. In the case of public key cryptography, it’s more complex. People seem to think it’s enough that they are in control of the secrecy of their private decryption and signing keys.

What they don’t realize is that they must also be in control of the integrity of all the public encryption and signing keys of their peers. If this is not the case, i.e. if the user cannot be sure he or she is encrypting with the correct key, the decryption key may not belong to their contact, but to a man in the middle.

If a secure messaging app prevents the user from verifying public keys, the claim that the software is end-to-end encrypted is unarguably snake oil. The whole point of end-to-end encryption is to make it impossible for the server and any active MITM attacker to access messages. Companies will want to avoid legal issues, so keep an eye out for carefully worded statements such as “our company is not in possession of the private keys of the users”.

TextSecure does it right

In the case of TextSecure, the users have a way to ensure they have actually received the public DH value of their contact: checking each other’s fingerprints.

Here’s how verifying the fingerprint in TextSecure detects the MITM attack:

[Diagram: fingerprint verification detecting the MITM]

But what about convenience? Every once in a while I come across a comment like:

People use [fingerprintless products] to make their lives a little bit easier, not harder by checking some set of numbers they don’t even know what it’s for.

This style of arguing is just horrible. The decentralized security of E2EE absolutely depends on users verifying identities. This is an inherent problem in cryptography and no application can fix it. Having a trusted third party manage keys for you removes the end-to-end encryption factor from the software. This is just an example of a larger phenomenon:

Security is a process, not a product. Products provide some protection, but the only way to effectively do business in an insecure world is to put processes in place that recognize the inherent insecurity in the products.

Bruce Schneier

To be frank, the tiny bit of convenience gained by not bothering to check keys can result in very inconvenient jail time if unjust laws and/or paranoid nation states are able to access your communication. This is especially scary considering parallel construction: you don’t have to be engaged in serious criminal activity for this type of surveillance to occur.

So, saying fingerprint checking is inconvenient is not only short-sighted, it’s also impolite: just because you don’t mind sharing your private life and giving HSAs leverage that possibly prevents you from changing the world, doesn’t mean your contact doesn’t mind.

Improving convenience

TextSecure already makes comparing fingerprints a breeze with QR codes, provided the users have Barcode Scanner installed. Let’s take a closer look:

Currently, opening the first function (the QR code or the scanner) from the chat takes four taps. The users then perform the first scan, make three more taps to move to the opposite function, perform another scan, and finally make two more taps to get back to the chat.

Here’s how OWS could significantly speed up the process even further:

First of all, TextSecure should come bundled with a barcode scanner. The software already defines the Alice and Bob roles for users by comparing which one has the larger base key. The “End secure session” function must be moved into the triple-dot menu at the top-right corner.

  1. Pressing the lock icon opens the barcode scanner for Bob, and the QR code for Alice.
  2. Scanning both fingerprints from the same QR code allows Bob to confirm both that
    1. Bob has received the correct DH identity key from Alice, and
    2. Alice has received the correct DH identity key from Bob.
  3. Once the key has been scanned, the scanning device will report either “Identities verified” or “MITM risk”. The notification and QR code are closed with a second tap.

Here’s a mock-up of how it would look:

[Mock-up: proposed TextSecure fingerprint verification]

This is less inconvenient than sending a picture from the phone. Currently, the fastest QR scan I managed to do with a friend took around 30 seconds. With my proposal, the process would take well under ten seconds.

Remote fingerprint verification

You have limited assurance when using the phone: HSAs have had 16 years to improve on voice morphing. If you’re a target, you might not be secure against such an active attack.

Sometimes people send me fingerprints through TextSecure. This is a bad idea. Here’s how the MITM can trivially remove all assurance from this practice:

[Diagram: fingerprint verification through the MITM]

While you can agree on obscure tactics for steganographically hiding pieces of the fingerprint in messages, you need yet another MITM-free channel to agree on them. So instead of agreeing on these practices face-to-face, just exchange the fingerprint face-to-face.

iMessage

Apple claims to be resisting a court order to decrypt iMessages in real time; they claim it’s a technical impossibility. This claim deserves a closer inspection. Page 35 of Apple’s Security guide states the following:

Apple does not log messages or attachments, and their contents are protected by end-to-end encryption so no one but the sender and receiver can access them. Apple cannot decrypt the data.

iMessage connects to Apple’s IDS directory service (the key distribution server) and the Apple Push Notification server (the IM server) over TLS-DHE. The following diagrams show how the keys used in end-to-end encryption are exchanged:

1. IDS delivers the contact’s public keys to the client.

2. APN delivers the signed and encrypted messages.

The messages are encrypted with unique 128-bit AES keys, which are in turn encrypted with 1280-bit RSA keys. Data is signed with 256-bit ECDSA (NIST P-256 curve). 1280-bit RSA has a computational complexity of 2^89.46, which is way below the current recommendation (2^128). In fact, RSA-1280 is almost a hundred times weaker than ECRYPT’s legacy standard level (2^96), evaluated to be secure until 2020. So there’s a possibility that the encryption keys are within computational reach of HSAs, if not now, perhaps in the near future.
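
The arithmetic behind “almost a hundred times weaker”:

    # RSA-1280 ~ 2^89.46 operations, vs. ECRYPT's legacy level of 2^96
    # and the recommended level of 2^128:
    print(2**96 / 2**89.46)    # ~93: almost a hundred times weaker
    print(2**128 / 2**89.46)   # ~4e11 times below the recommendation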

What’s worse than crypto dragging behind standards is that the implementation of end-to-end encryption (E2EE) in iMessage turns out to be insecure by design. Matthew Green recently wrote a great article about this, so I’m keeping the explanation short.

Above, the quote from Apple stated they “cannot decrypt data”. This is only a half-truth, as Apple decides which public keys you use to protect the keys that protect your messages. This is in direct conflict with Stef’s 7 rules to detect snake oil:

#4 “The user doesn’t generate, or exclusively own the private encryption keys”

I should note that iMessage is also in violation of

#1 “Not free software”
#3 “Runs on a smartphone”
#5 “There is no threat model”
#7 “Neglects general sad state of host security”

but #4 is in my opinion the most important. The user has to rely on blind faith that the client is using the contact’s actual public encryption key to encrypt the symmetric key, and that the public ECDSA signature verification key belongs to the contact. Under an NSL, Apple might be coerced into performing the following attack:

1. The server is coerced to send the public encryption and signing keys of the attacker instead of the contact’s.

2. Once the public keys have been distributed, here’s how the MITM attack takes place when Alice sends a message to Bob (the reply from Bob is the exact mirror of this process, so it was left out of the already huge diagram):

[Diagram: iMessage MITM with a compromised key server]
I should also mention that a DHE MITM using the server or CA private key allows the HSA to pose as the IDS and APN servers. So iMessage users might be under MITM attack by HSAs even if the HSA never approached Apple.

The standard approach to defeating this simple MITM attack is to compare the fingerprint of the public signing key via an off-band channel where the integrity of the data is guaranteed. iMessage does not have a fingerprint checking feature, so there’s no way to detect MITM attacks.

tl;dr? Vote with your feet.

MITM attacks with CA private key

There are tons of messaging tools and servers out there, each with their own private keys. Stealing the private key from every server takes effort. After the Snowden leaks, there has been a push towards DHE, so mass surveillance with passive decryption is slowly becoming obsolete. With DHE, a MITM attack is required every time. Is there an easier way to mount undetectable MITM attacks against any service? Of course there is.

A government agency can either compel a certificate authority (CA) to issue a false certificate, or request their private key to generate as many false certificates as desired. Does the surveillance equipment exist? Yes.

Has this practice been documented? It appears so.

It has been argued that issuing a subpoena for CA private keys is not needed, because a browser would trust a certificate signed by any CA, the Turkish government’s for example. However, all it takes to detect that attack is clicking the lock icon in the address bar.

[Screenshot: two certificate chains side by side]

Guess which one is the rogue certificate.

So it’s clear that the extremely risk-averse HSAs will only want to sign their false certificate with the original CA (or use the original CA’s own key) to remain undetectable. Once the false certificate has been created, here’s how the MITM attack works with each key exchange the server might use.

RSA key exchange:

[Diagram: TLS-RSA MITM with the CA private key]

DHE key exchange:

[Diagram: TLS-DHE MITM with the CA private key]
Yes, Google Chrome would detect if the pre-installed Google certificate suddenly changed. A browser, however, does not contain the certificates for every website; otherwise we would not need CAs.

Chrome’s installer is also signed by a key that is in turn signed by a CA. Your operating system would not detect a malicious version of Chrome that the HSA has signed with the CA private key.

Additionally, every time you open a Chromium installer, you completely bypass the security provided by CAs by executing a binary from an “Unknown Publisher”:

[Screenshot: “Unknown Publisher” warning]
To summarize:

Private messaging over TLS provides no expectation of privacy against HSAs. There are of course many other bad actors trying to obtain credit card data etc., so TLS works where there is commercial interest for trust between user and server. However, in private messaging the server is always an untrustworthy man in the middle. We need something more private: end-to-end encryption.

MITM Attacks with server private key

As discussed in the section about passive RSA decryption, keys can be obtained from the server either with an NSL or with malware. RSA keys not only make passive decryption trivial, they also enable completely invisible MITM attacks. Here’s how the attack works against each key exchange protocol:

RSA:

[Diagram: TLS-RSA MITM with the server private key]

DHE:

[Diagram: TLS-DHE MITM with the server private key]

Since DHE is an anonymous key exchange protocol, the only way for the client to know that the DHE parameters come from the server is the signature over the DH values, verified using the CA-signed public signing key of the server. The client is unable to detect the MITM attack if the HSA signs its DH values using the private signing key of the server.

There exists no way for the client (not even certificate pinning) to detect a MITM based on a stolen server private key.