TFC-CEV

Bulk CNE is a problem. In a previous article I discussed how to achieve end point security with a three-computer setup and data diodes. Tinfoil Chat, or TFC, is the author's FOSS suite of programs written in Python (auditability and the inability to distribute insecure binaries were priorities) that makes real-time chat possible with the described setup. Here's a visualization of TFC-CEV:

Figure: TFC overview

TFC-CEV addresses many problems in current end-to-end encryption. First of all, keys are generated by mixing /dev/(u)random with entropy obtained from an HWRNG with a freely available circuit design. TFC-CEV uses four additive ciphers to create provably secure cascading encryption:

TFC-CEV encrypts messages with Keccak-CTR (512-bit key), XSalsa20 (256-bit key), Twofish-CTR (256-bit key) and AES-GCM (256-bit key). Authentication is done with three algorithms: GMAC as part of AES-GCM (256-bit key), HMAC-SHA512 (512-bit key) and SHA3-512 MAC (1144-bit key).

XSalsa20, Twofish and AES use the first 256 bits of their 512-bit keys. HMAC-SHA512 and Keccak-CTR both use their entire 512-bit keys. SHA3-MAC uses the first 1144 bits of three individual 512-bit keys. All eight 512-bit keys are individually hashed with the SHA3-512 hash function after each message to provide forward secrecy.
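
As a minimal sketch of that rekeying step (illustrative names, not TFC's actual code), each key is simply replaced by its SHA3-512 digest after every message, so a key captured later cannot decrypt earlier traffic:

```python
import hashlib
import os

def rekey(keys: dict) -> dict:
    """Replace every 512-bit key with its SHA3-512 digest (one ratchet step)."""
    return {name: hashlib.sha3_512(key).digest() for name, key in keys.items()}

# Eight independent 512-bit keys, as described above (dummy values here).
keys = {"key_%d" % i: os.urandom(64) for i in range(8)}
keys = rekey(keys)   # done after each message; old keys are overwritten,
                     # so a captured device cannot decrypt past ciphertexts
```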

TFC has three main programs (Tx.py, Rx.py and NH.py) that move data unidirectionally between the two data diode channels. NH.py also talks to the Pidgin IM client. (The main reason Pidgin was chosen is that it's included with Tails.) Past vulnerabilities of Libpurple are not a problem, as TFC assumes the NH is completely controlled by the HSA. In addition to the main programs, key generation is done with a set of additional programs.

TFC is also the first(..?) program to provide a trickle connection, where the TxM outputs a constant stream of messages to the recipient. This prevents an adversary who inserts malware into the NH or RxM from finding out when, and how much, communication is actually taking place. The receiving device does not display noise messages, so the trickle has no noticeable effect on the conversation. Noise messages are only sent when the user has not added messages to the queue. It's actually a bit more complicated than that, since TFC also transmits commands directly from the TxM to the RxM, encrypted with their own dedicated set of keys: every command with a volatile effect, or one that contains private information, is encrypted with the same technique as messages. Before outputting a command packet, the trickle connection independently flips a coin to decide whether to output it or a noise message. The same goes for messages: before sending a message (or a part of it), the coin is flipped to decide whether a noise command packet is sent instead.
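
A rough sketch of that trickle logic (queue names and packet formats are invented for illustration; real packets would be encrypted and indistinguishable from genuine traffic):

```python
import secrets
from queue import Queue, Empty

# Hypothetical constant-length placeholder packets.
NOISE_MESSAGE = b"\x00" * 256
NOISE_COMMAND = b"\x01" * 256

def next_packet(message_q: Queue, command_q: Queue) -> bytes:
    """Pick the next packet for the constant-rate output stream."""
    if secrets.randbits(1):            # independent coin flip per slot
        try:
            return command_q.get_nowait()
        except Empty:
            return NOISE_COMMAND       # no command queued: send noise
    try:
        return message_q.get_nowait()
    except Empty:
        return NOISE_MESSAGE           # no message queued: send noise
```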

You can read about TFC in more detail via the links below:

Whitepaper

Manual

Project pages on GitHub for the OTP version and the Cascading Encryption Version (CEV).

Reception

The feedback on the project has been very positive, excluding the feedback on my cocky title "zero-day immune" from /r/netsec, but I'm glad someone got the idea.

The project has received a lot of constructive criticism too. Getting things right is always a learning process, but in the end it has been worth it. To quote security researcher Nick P:

Far as Tinfoil Chat, I’ve recommended it heartily as a project to use and improve. Markus Ottela took what he learned from prior work and our comments at Schneier’s blog (esp on data diodes & physical separation) to create a unique, solid design. He’s been posting on the blog for feedback for months, we’ve suggested many risk mitigations (eg polyciphers, continuous transmission), and he’s integrated about every one into his system. Most just ignore such things or make excuses: Markus is 1 in a 1,000 in (a) applying what’s proven and (b) not letting problems become legacy “features.”

Source

End point security

The article on Bulk CNE showed how exploitation of client devices shreds the security of current top-of-the-line end-to-end encryption tools. While it's not the job of a secure messaging client to patch vulnerabilities in the host OS, it is required that the client remains functional in a secure configuration. What would that look like? In the case of current E2EE software, the Trusted Computing Base (TCB) is located on the networked device. This makes it easy for state-sponsored malware to exfiltrate keys and plaintexts.


It might seem like an impossible task to solve: software always has vulnerabilities, and because a modern client is constantly connected to the Internet, the window of exposure for key exfiltration remains open. TextSecure does help mitigate the threat of key exfiltration with its DH ratcheting: the second a MITM attack ends, the attacker needs to re-exfiltrate all keys for the next MITM attack to succeed.

But we can do better than that. Google presented Project Vault in June. It's a smart card in the shape of a microSD card that can store private keys and encrypt data inside its secure cryptoprocessor. This is a great improvement in the sense that it guarantees keys remain secure and encryption is done properly despite endpoint compromise (assuming the smart card has no backdoors). However, in the case of instant messaging (IM), it's not enough.

As you can see, sensitive plaintext messages are passed to Project Vault through the insecure host OS. Additionally, all replies from contacts that the TCB decrypts are displayed via the host OS. While smart cards have many use cases, this does not seem like a viable one. We need an environment where the keyboard and display connect directly to the TCB. So what should we do? Let's quote two cryptographers:

Each of the [reviewed] apps seem quite good, cryptographically speaking. But that’s not the problem. The real issue is that they each run on a vulnerable, networked platform. If I really had to trust my life to a piece of software, I would probably use something much less flashy — GnuPG, maybe, running on an isolated computer locked in a basement.

Matthew Green

Assume that while your computer can be compromised, it would take work and risk on the part of the NSA — so it probably isn’t. If you have something really important, use an air gap. Since I started working with the Snowden documents, I bought a new computer that has never been connected to the Internet. If I want to transfer a file, I encrypt the file on the secure computer and walk it over to my Internet computer, using a USB stick. To decrypt something, I reverse the process. This might not be bulletproof, but it’s pretty good.

Bruce Schneier

This approach would use two computers instead of one. In essence it looks like this:

Figure: EPS 3

However, this system only works if the HSA is unable to produce malware that can spread on USB drives. Compared to exploit research, such a feature isn't very hard for the HSA to create. Let's go through the attack step by step.

  1. We encrypt a message using the airgapped computer that functions as the TCB. We move the ciphertext from the airgapped PC to the networked PC using a never-before-used USB drive.
  2. We send the ciphertext to our contact from the networked PC. We then copy the reply from our contact to the USB drive and move it back to the airgapped PC for decryption. Since the networked computer is infected, the infection spreads to the airgapped PC via that USB drive.
  3. When we write our second message, all encryption keys are transmitted by the malware on the USB drive to the infected, networked computer, which will then exfiltrate those keys back to the adversary. Game over.
  4. Now, while this configuration is not secure, it does show us an important thing. Before we transferred the reply to the airgapped PC, our TCB was secure. We can send as many messages as we want, using a new USB drive every time and throwing the used ones in the shredder. (Don't worry about the costs, we won't be using USB drives after this article.) If we stop sending messages, we can also receive as many messages as we want on the airgapped device, using a new USB drive every time. The keys are still in our possession. The compromise of keys/plaintexts happens only once we send a message after decrypting one or more messages. See where I'm going with this? If we split the two secure processes onto two airgapped computers, TCBs whose dedicated purpose is to either encrypt or decrypt messages, we can repeat the first two steps in isolation, forever.

This does in fact work. The lower (grey) TCB stays clean when it only outputs data; a clean system does not output keys on its own. The upper (red) TCB is compromised, but keys and decrypted plaintexts stay inside the endpoint, because the device only receives data.

Now, let's remove the $4 cost per message. Douglas W. Jones and Tom C. Bowersox wrote a fantastic paper on RS232 data diodes. A data diode is a device that uses the laws of physics to enforce the direction of data flow in an asynchronous data channel. The approach is so secure that commercial models have received EAL7+ (the best possible) Common Criteria certification. The cost of these devices, however, is nowhere near suitable for end users. Here's how we can construct a sub-$10 data diode for RS232 (serial port):

Figure: RS232 data diode

The transmitting side has two LEDs connected in parallel with opposite polarities. The receiving side has two phototransistors that regenerate the signal by outputting power from 6V batteries (with opposite polarities in relation to Rx and GND) when the corresponding phototransistor starts receiving light. This optical gap is guaranteed to be one-way: while LEDs do generate a very small current when light is cast upon them, phototransistors do not emit light.

When the data diode is used to replace the USB drives in the three-computer setup, here's what the final assembly looks like:

30 - EPS 8

Messages are written on the TxM (transmission module) and received by the RxM (receiver module) devices of both users. The NH (network handler) acts as a converter between the serial ports and the network. The data diodes provide high-assurance protection against exfiltration of sensitive data. TxM and RxM don't have to be commercial HSM devices; a netbook should do fine for most situations, provided that the data diode is the only connection to the outside world: WLAN and Bluetooth interfaces must be removed, together with speakers, webcam and microphones. Batteries should be charged only when the device is powered off. This approach puts a one-time price tag on end point security.

Now I should immediately discuss the three vulnerabilities in this approach.

Firstly, if the TxM is compromised during setup, the malware can exfiltrate keys. However, this kind of compromise can be ruled out to some extent. As the TxM never knows what's on the reception side of the data diode, the receiving end can be plugged into a spectrum analyzer. These devices can reveal hidden signals, because the displayed output is the result of FFT calculations and thus no part of the signal is missed. Even if this is not done, compared to the continuous window of exposure of other E2EE systems, a ~10 minute window during TxM setup is a ground-breaking improvement.

Secondly, while the RxM has no window of exposure for exfiltrating data, the window of exposure for exploiting the RxM remains open. Thus, malware on the RxM can show arbitrary messages at any time. However, because there is no way for the attacker to find out what messages the users actually send each other, any displayed message is highly likely to be out of context (unless the malware has a sophisticated AI algorithm). Additionally, users can compare the log files of their RxMs to detect whether they include messages that the other participant never typed.
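
Such a log comparison could be as simple as both users computing a short digest over their RxM transcripts and comparing the result over an authenticated channel; a hypothetical sketch:

```python
import hashlib

def transcript_digest(messages):
    """Short digest over an RxM log (hypothetical helper)."""
    h = hashlib.sha3_256()
    for m in messages:
        data = m.encode()
        h.update(len(data).to_bytes(4, "big") + data)  # length-prefix each entry
    return h.hexdigest()[:16]  # short enough to read aloud

# Both users run this over their logs; an injected message changes the digest.
```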

Thirdly, the endpoint is only as secure as the physical environment around it. Covert microphones, keyloggers and video cameras bypass the encryption completely. Physical compromise of the TxM also compromises the security. However, these can be claimed to be actual targeted attacks: you can't copy-paste human beings to spy on every user individually the way you can copy-paste state-sponsored malware that has more or less ubiquitous access.

There is, however, one passive remote attack that has been public knowledge since 1985:

The average consumer is unable to provide high-assurance EMSEC for their endpoints against TEMPEST monitoring. Briefly explained, all keyboard and display cables act as weak antennas when data is passed through them. By collecting these signals with sophisticated equipment, the attacker is able to log keystrokes and view screen content. An active version of this attack, done by illuminating retro-reflector implants, grows the range from "across the street" to more than 10 km. As far as I know, TEMPEST still requires a tasked team, and even if it could be done with something like SIGINT drones, there would be no way to avoid a linearly increasing cost when scaling up surveillance. Currently, such an attack would be too expensive. The day it isn't, you'll know:

Figure: drones over London mock-up

Maybe.

Physical attacks are the proper balance between privacy and security. As long as the privacy community keeps arguing that "unscalable" exploits are a functional alternative to not backdooring proprietary software and services, we will submit ourselves to a false dilemma on LEAs' terms: "Let's stop doing mass surveillance that hurts company reputation and switch to mass surveillance where companies have plausible deniability." Unless we start communicating with high-assurance setups that are secure against mass-scale end point exploitation, neither outcome of the debate provides a real solution to stop mass surveillance.

That being said, let's discuss one last issue.

Key distribution

We can't trust a possibly compromised RxM to generate private DH values / shared secrets from received public DH values (the value it receives might have been sent in by the HSA). If we want to do a DH key exchange with three computers, we must

  1. Generate the private DH value on the TxM and move it directly to the RxM, either with a one-time USB drive or through an additional data diode.
  2. Type the very long public DH value from the RxM into the TxM by hand (we can't have automated input to the TxM at any point after setup).
  3. Authenticate the integrity of the received public values, preferably in a face-to-face meeting (as discussed in the article on Axolotl).
  4. Finally, once the TxM has generated the shared secret, again move it directly to the RxM, either with a one-time USB drive or through an additional data diode. After this, a KDF can generate the two symmetric keys from the shared secret.

This is very inconvenient, and worth the trouble only if the participants live across the globe and a physical meeting would consume even more of their resources.

What can be done instead is to generate the symmetric encryption keys on the TxM and move them directly to the two RxM devices, either with a USB drive or a data diode. The latter is more secure, since it avoids the burden of properly sanitizing USB drives, but it requires a lot more hassle during the key-exchange rendezvous.

Let's discuss the misconceptions about pre-shared keys:

“Physical key exchange is too inconvenient”

Physical key exchange is inconvenient, but it's the highest-assurance method of providing integrity there is. Even if you were using Axolotl (TextSecure/Signal), you would have less assurance when verifying fingerprints over the phone. You should always compare fingerprints face to face; for this, TextSecure provides a convenient QR-code fingerprint verification feature. In my article on Axolotl, I made a proposal that would speed up the current QR-code fingerprint verification in TextSecure three-fold. Compared to my proposal, exchange of USB drives is four times faster (copying the keyfile and hammering the USB drive takes some time too, of course).

We can have forward secrecy by passing the encryption key through a PRF after each message. The ciphertext just needs to include information about how many times the initial key has been run through the PRF. We don't even have to worry about keys getting out of sync if some packets are not received. The only problem is that if packets arrive out of order, any packet arriving after a more recent one becomes undecryptable (unless old keys are retained instead of being immediately overwritten).
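
A sketch of that catch-up logic (the SHA3-512 PRF here is an illustrative choice, not a requirement):

```python
import hashlib

def catch_up(key: bytes, local_count: int, packet_count: int) -> bytes:
    """Hash the stored key forward to match the packet's PRF iteration count."""
    if packet_count < local_count:
        raise ValueError("packet uses an already-overwritten key")
    for _ in range(packet_count - local_count):
        key = hashlib.sha3_512(key).digest()
    return key
```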

Actually, very few modern cryptographic properties are lost with the three-computer setup.

Since messages are authenticated with symmetric MACs, we have deniability (the recipient also holds the key that can sign messages).

Lack of DH ratcheting does take away the self-healing property that Axolotl has. But since there is no remote zero-day exploit able to exfiltrate keys, it's unlikely this feature will be needed. Self-healing might not even do the trick: in the case of TextSecure, a compromised TCB might generate insecure keys, or covertly exfiltrate keys and/or plaintext messages.

"But who's going to write the program that supports this type of hardware layout?"

I already did.

Bulk CNE

End-to-end encryption is hailed as the solution to end mass surveillance. This approach is certainly better than previous generations of protocols, but it still falls short in two respects:

1. Random number generators may be weak


ECRYPT II recommendations state that 96-bit keys are secure only until 2020; 256-bit keys, on the other hand, are secure for the "foreseeable future".

If the encryption keys come from a low-entropy source, they might be predictable. This would mean the adversary is able to try all possible keys and decrypt the communication without attacking the algorithm or the devices. The NSA was revealed to have undermined Dual EC DRBG. This might indicate the NSA has also undermined hardware random number generators; such an attack has already been shown to be possible against the ones used in Intel processors.
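
The common defence is to mix several independent entropy sources, so the result is at least as unpredictable as the best source and a backdoored HWRNG alone cannot weaken the keys. A minimal sketch of hash-based mixing (illustrative only, not any specific product's code):

```python
import hashlib
import os

def mixed_key(hwrng_sample: bytes, nbytes: int = 32) -> bytes:
    """Mix kernel CSPRNG output with an HWRNG sample by hashing (nbytes <= 64).

    The digest is unpredictable as long as at least one of the two
    sources is sound.
    """
    kernel_sample = os.urandom(len(hwrng_sample))
    return hashlib.sha3_512(kernel_sample + hwrng_sample).digest()[:nbytes]
```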

2. End point security still sucks.

For the longest time, the infosec community thought of hacking clients as a targeted attack. The Snowden leaks, however, have shed new light on bulk computer network exploitation (CNE).

The Intercept wrote an article about it. Wired wrote another. It has also been discussed in various conference talks and speeches by security experts and privacy advocates. I have also collected statements regarding (bulk) CNE into a separate project.

What implications does this have for end-to-end encrypted tools? Let's take the current top-of-the-line protocol, Axolotl, and its implementation, TextSecure. By exploiting the endpoint and stealing the private keys before the initial key exchange, the HSA is able to perform an undetectable MITM attack against the users:

Figure: TextSecure MITM with exfiltrated keys

The window of opportunity for this attack is very small in TextSecure. However, persistent malware can exfiltrate messages directly from the device, and this can be done at any time.

Figure: TextSecure keylogger

A smartphone is simply not a secure trusted computing base to perform encryption on. The next article will discuss how to secure the end points against exfiltration of keys and plaintexts.

WhatsApp vs TextSecure – a closer look at Axolotl

It was recently announced that WhatsApp is switching to end-to-end encryption. And not just any protocol, but Axolotl by Open Whisper Systems.

Axolotl and the implementations

Here's how Axolotl is implemented in WhatsApp and TextSecure:

Figure: Axolotl WhatsApp overview

Figure: Axolotl TextSecure overview

NOTE: The colors are used to simplify the ECDHE down to this color-mixture example, as the math would take too much room.

Axolotl is an outstanding protocol that uses ratcheted Diffie-Hellman key exchanges. Instead of a long-term signing key, Axolotl initiates the key exchange with three ECDH operations, one of which is done with a long-term private DH value (the identity key).

This way, Axolotl provides forward secrecy while removing the need to advertise public DH values. Messages are encrypted with message keys derived from chain keys, which in turn derive from a constantly changing root key. Unlike in OTR, the same symmetric key is not used until the next DH handshake completes. Instead, the chain key is run through HMAC-SHA256 to provide forward secrecy for each message separately. The recipient is able to regenerate all the message keys through cyclic hashing of the current chain key, to decrypt any messages sent while their client was not replying.

The root key is renewed by cyclic hashing through HMAC-SHA256 and HKDF, together with entropy obtained from the constant generation of DH shared secrets, so users are able to retrace the trust in the current root key back to the initial key exchange.
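
Compressed into code, the symmetric part of the ratchet looks roughly like this (the constants follow the published TextSecure protocol description; treat the details as an approximation, not the actual implementation):

```python
import hashlib
import hmac

def message_key(chain_key: bytes) -> bytes:
    """Key that encrypts exactly one message; derived, used, then discarded."""
    return hmac.new(chain_key, b"\x01", hashlib.sha256).digest()

def next_chain_key(chain_key: bytes) -> bytes:
    """Step the chain forward; old chain keys are overwritten."""
    return hmac.new(chain_key, b"\x02", hashlib.sha256).digest()

# A recipient who was offline for n messages re-derives the skipped
# message keys by stepping the chain key forward n times.
```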

The two implementations look almost completely identical. Can you spot the difference?

WhatsApp does it wrong

There is no fingerprint generation in WhatsApp. This means WhatsApp provides almost as bad security as iMessage. Assuming WhatsApp did not make any other changes to Axolotl in their proprietary source code, an undetectable man-in-the-middle attack can be mounted either from the Internet backbone by an HSA, or from within the WhatsApp server (compromised either with malware or an NSL). Whether Axolotl uses additional TLS to protect the DH values between client and server does not matter; PKI provides no meaningful protection against HSAs.

In iMessage, another public key can be added, or the current public key can be switched, at any time without the user noticing. In the case of WhatsApp, the MITM can only be performed during the initial handshake (assuming the proprietary code does indeed pass the previous root key into the cyclic hashing process).

Here’s how the MITM against WhatsApp works (only the key exchange part is shown):

Figure: Axolotl WhatsApp MITM

The missing fingerprint feature makes WhatsApp completely insecure. End-to-end encryption is usually defined as "the user is in control of the encryption keys". In the case of public key cryptography, it's more complex than that. People seem to think it's enough that they are in control of the secrecy of their private decryption and signing keys.

What they don't realize is that they must also be in control of the integrity of all public encryption and signing keys of their peers. If this is not the case, i.e. if the user cannot be sure he or she is encrypting with the correct key, the decryption key may not belong to their contact, but to a man in the middle.

If a secure messaging app prevents the user from verifying public keys, the claim that the software is end-to-end encrypted is unarguably snake oil. The whole point of end-to-end encryption is to make it impossible for the server and any active MITM attacker to access messages. Companies want to avoid legal issues, so keep an eye out for carefully worded statements such as "our company is not in possession of the private keys of the users".

TextSecure does it right

In the case of TextSecure, the users have a way to ensure they have actually received the public DH value of their contact, by checking each other's fingerprints:

Figure: fingerprints

Here's how verifying the fingerprint in TextSecure detects the MITM attack:

Figure: Axolotl TextSecure MITM

But what about convenience? Every once in a while I come across a comment like:

People use [fingerprintless products] to make their lives a little bit easier, not harder by checking some set of numbers they don’t even know what it’s for.

This style of argument is just horrible. The decentralized security of E2EE absolutely depends on users verifying identities. This is an inherent problem in cryptography, and no application can fix it. Having a trusted third party manage keys for you takes the "end-to-end" out of that software's encryption. This is just an example of a larger phenomenon:

Security is a process, not a product. Products provide some protection, but the only way to effectively do business in an insecure world is to put processes in place that recognize the inherent insecurity in the products.

Bruce Schneier

To be frank, the tiny bit of convenience gained by not bothering to check keys can result in very inconvenient jail time if unjust laws and/or paranoid nation states are able to access your communication. This is especially scary considering parallel construction: you don't have to be engaged in serious criminal activity for this type of surveillance to occur.

So, saying fingerprint checking is inconvenient is not only short-sighted, it's also impolite: just because you don't mind sharing your private life, and giving HSAs leverage that may prevent you from changing the world, doesn't mean your contact doesn't mind.

Improving convenience

TextSecure already makes comparing fingerprints a breeze with QR codes, provided the users have Barcode Scanner installed. Let's take a closer look:

Currently, opening the first function (the QR code or the scanner) from the chat takes four taps. The users then perform the first scan, make three more taps to move to the opposite function, perform another scan, and finally make two more taps to get back to the chat.

Here’s how OWS could significantly speed up the process even further:

Initially, TextSecure should come bundled with a barcode scanner. The software already assigns the Alice and Bob roles by comparing which user has the larger base key. The "End secure session" item must be moved into the triple-dot menu at the top-right corner.

  1. Pressing the lock icon opens the barcode scanner for Bob, and the QR code for Alice.
  2. Scanning both fingerprints from the same QR code allows Bob to confirm that
    1. Bob has received the correct DH identity key from Alice, and
    2. Alice has received the correct DH identity key from Bob.
  3. Once the key has been scanned, the scanning device reports either "Identities verified" or "MITM risk". The notification and QR code are closed with a second tap.
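
In code, the proposed check boils down to comparing two fingerprint pairs read from a single QR code (all names here are hypothetical):

```python
def verify_scan(qr_payload, own_fp, contact_fp):
    """Bob's device checks both fingerprints shown in Alice's QR code.

    qr_payload is the pair (alice_fp, bob_fp) as displayed by Alice;
    own_fp and contact_fp are the values Bob's device knows locally.
    """
    alice_fp, bob_fp = qr_payload
    if alice_fp == contact_fp and bob_fp == own_fp:
        return "Identities verified"
    return "MITM risk"
```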

Here's a mock-up of how it would look:

Figure: TextSecure new fingerprints

This is less inconvenient than sending a picture from the phone. Currently, the fastest QR scan I managed to do with a friend took around 30 seconds. With my proposal, the process would take well under ten seconds.

Remote fingerprint verification

You have limited assurance when verifying fingerprints over the phone: HSAs have had 16 years to improve on voice morphing. If you're a target, you might not be secure against such an active attack.

Sometimes people send me fingerprints through TextSecure. This is a bad idea. Here's how a MITM can trivially remove all assurance from this practice:

Figure: Axolotl TextSecure verification through MITM

While you can agree on obscure tactics for steganographically hiding pieces of the fingerprint in messages, you need yet another MITM-free channel to agree on those tactics. So instead of meeting face to face to agree on such practices, just exchange the fingerprint at that meeting.

iMessage

Apple claims to be resisting a court order to decrypt iMessages in real time. They claim it's a technical impossibility. This claim deserves closer inspection. Page 35 of Apple's security guide states the following:

Apple does not log messages or attachments, and their contents are protected by end-to-end encryption so no one but the sender and receiver can access them. Apple cannot decrypt the data.

iMessage connects to Apple's IDS directory service (the key distribution server) and the Apple Push Notification server (the IM server) over TLS DHE. The following diagrams show how the keys used in end-to-end encryption are exchanged:

1. IDS delivers the client the public keys of the contact:

Figure: iMessage key exchange overview

2. APN delivers signed and encrypted messages:

Figure: iMessage communication overview

The messages are encrypted with unique 128-bit AES keys, which are in turn encrypted with 1280-bit RSA keys. Data is signed with 256-bit ECDSA (NIST P-256 curve). 1280-bit RSA offers a computational complexity of about 2^89.46, which is way below the current recommendation (2^128). In fact, RSA-1280 is almost a hundred times weaker than ECRYPT's legacy standard level (2^96), evaluated to be secure until 2020. So there's a possibility that the encryption keys are within computational reach of HSAs, if not now, then perhaps in the near future.

What's worse than crypto lagging behind standards is that the implementation of end-to-end encryption (E2EE) in iMessage turns out to be insecure by design. Matthew Green recently wrote a great article about this, so I'll keep the explanation short.

Above, the quote from Apple stated they "cannot decrypt data". This is only a half-truth, as Apple decides which encryption keys you use to protect the keys that protect your messages. This is in direct conflict with Stef's 7 rules to detect snake oil, namely

#4 "The user doesn't generate, or exclusively own the private encryption keys"

I should note that iMessage is also in violation of

#1 "Not free software"
#3 "Runs on a smartphone"
#5 "There is no threat model"
#7 "Neglects general sad state of host security"

but #4 is in my opinion the most important. The user has to rely on blind faith that the client is using the contact's actual public encryption key to encrypt the symmetric key, and that the public ECDSA signature verification key belongs to the contact. Under an NSL, Apple might be coerced into performing the following attack:

1. The server is coerced to send the attacker's public encryption and signing keys instead of the contact's:

Figure: iMessage key exchange under server compromise

2. Once the public keys have been distributed, here's how the MITM attack takes place when Alice sends a message to Bob (the reply from Bob is the exact mirror of this process, so it was left out of the already huge scene):

Figure: iMessage communication under server compromise

I should also mention that a DHE MITM using the server's or a CA's private key allows the HSA to pose as the IDS and APN servers. So iMessage users might be under MITM attack by HSAs even if the HSA never approached Apple.

The standard way to defeat this simple MITM attack is to compare the fingerprints of public signing keys via an out-of-band channel where the integrity of the data is guaranteed. iMessage does not have a fingerprint checking feature, so there's no way to detect MITM attacks.

tl;dr? Vote with your feet.

MITM attacks with CA private key

There are tons of messaging tools and servers out there, each with their own private keys. Stealing the private key from every server takes effort. After the Snowden leaks, there has been a push towards DHE, so mass surveillance with passive decryption is slowly dying out. With DHE, a MITM attack is required every time. Is there an easier way to make undetectable MITM attacks against any service? Of course there is.

A government agency can either compel a certificate authority (CA) to issue a false certificate, or request the CA's private key to generate as many false certificates as desired. Does the surveillance equipment exist? Yes.

Has this practice been documented? It appears so.

It has been argued that issuing a subpoena for CA private keys is not needed, because a browser will trust a certificate signed by any CA (the Turkish government's, for example). However, all it takes to detect that attack is clicking the lock icon in the address bar.

Figure: CAU

Guess which one is the rogue certificate.

So it's clear that extremely risk-averse HSAs will only use the original CA to sign their false certificate, or the original CA's key, in order to remain undetectable. Once the false certificate has been created, here's how the MITM attack works when the server uses either key exchange.

RSA key exchange:

Figure: TLS RSA MITM with CA key

DHE key exchange:

Figure: TLS DHE MITM with CA key

Yes, Google Chrome would detect if the pre-installed Google certificate suddenly changed. A browser, however, does not contain the certificates for every website; otherwise we would not need CAs.

Chrome's installer is also signed by a key, which is in turn signed by a CA. Your operating system would not detect a malicious version of Chrome that an HSA has signed with a CA private key.

Additionally, every time you open a Chromium installer, you completely bypass the security provided by CAs by executing a binary from an "Unknown Publisher":

Figure: unknown publisher dialog

To summarize:

Private messaging with TLS provides no expectation of privacy against HSAs. There are of course many other bad actors trying to obtain credit card data and the like, so TLS works where there is a commercial interest in trust between user and server. In private messaging, however, the server is always an untrustworthy man in the middle. We need something more private: end-to-end encryption.

MITM Attacks with server private key

As discussed in the section about passive RSA decryption, keys can be obtained from the server either with an NSL or with malware. RSA keys not only make passive decryption trivial, they also enable completely invisible MITM attacks. Here's how the attack works against each key exchange protocol:

RSA:

Figure: TLS RSA MITM with server key

DHE:

Figure: TLS DHE MITM with server key

Since DHE is an anonymous key exchange protocol, the only way for the client to know that the DHE parameters come from the server is the signature over the DH values, verified using the CA-signed public signing key of the server. The client is unable to detect the MITM attack if the HSA signs its DH values using the private signing key of the server.

There is no way for the client, not even with certificate pinning, to detect a MITM based on a stolen server private key.
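
To make this concrete, here is a sketch using the pyca/cryptography library: the signature check only proves possession of the private signing key, so a MITM holding a stolen copy passes it just as well as the server does.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
dh_params = b"g, p, g^s mod p, client random, server random"

# Whoever holds server_key (the server, or an HSA with a stolen copy)
# produces a signature the client will accept.
signature = server_key.sign(dh_params, padding.PKCS1v15(), hashes.SHA256())

# Client-side check; raises InvalidSignature only if data or key differ.
server_key.public_key().verify(signature, dh_params,
                               padding.PKCS1v15(), hashes.SHA256())
```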

TLS – Passive tap with server key

In the wake of the Snowden leaks, warrant canaries are becoming popular. This means it's likely that server-side IM logs will be obtained with malware rather than NSLs. Such illegally obtained conversations are an example of parallel construction, which is incompatible with the Rechtsstaat (a doctrine with no exact English translation, roughly "rule-of-law state"), a prerequisite of liberal democracy.

Some privacy-conscious services do not actively store logs about users on their servers. Installing persistent malware that stores and exfiltrates data periodically is risky for the HSA. Instead, malware might go after the private key of the server. This process is very likely to be extremely covert, and apart from rare cases such as the recent report on Cryptome, very little is known about it.

In the case of Lavabit, the HSA used legal means to obtain the RSA decryption key.
Figure: Lavabit key order

Source (page 36)

It's impossible to say whether the NSL was just to make data, already decrypted with a stolen key, legal evidence. Once the HSA was in possession of Lavabit's private key, it could decrypt all past and future messages between users and the server: this requires only the encrypted data and the handshake (client random, server random, and the RSA-encrypted PMS). The PMS is always encrypted using the same RSA public key. Losing control of the private key ruined the security of Lavabit, so they did the right thing and closed their service.
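
Concretely, in TLS 1.2 the derivation an eavesdropper performs after decrypting the PMS is the standard PRF from RFC 5246 (sketch below, error handling omitted):

```python
import hashlib
import hmac

def p_sha256(secret, seed, length):
    """P_SHA256 expansion function from RFC 5246, section 5."""
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def master_secret(pms, client_random, server_random):
    # With the server's RSA key, a passive tap first decrypts the PMS,
    # then runs this same public computation to obtain the session keys.
    return p_sha256(pms, b"master secret" + client_random + server_random, 48)
```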

Figure: TLS RSA passive tap with server key

How effective is this attack? According to GCHQ, at least as late as 2012, 90% of servers used RSA key exchange. This makes NSA's UPSTREAM and GCHQ's TEMPORA very efficient against naive (i.e. RSA-based) TLS.

Figure: FLYING PIG

Slide leaked by whistleblower Edward Snowden

The solution that makes passive decryption impossible is to switch the key exchange protocol from RSA to DHE.

Figure: TLS DHE passive tap with server key

In DHE, the server does not have a long-term RSA decryption key that could retrospectively decrypt captured PMSs. In fact, the PMS is never transmitted over the wire. It is instead derived by combining the other party's public DH value with one's own private DH value.

The private values stay on their own devices until the end of the session, after which all DH values, along with the PMS, are destroyed by both the client and the server. Destruction of this data prevents retrospective decryption of ciphertexts. This is called forward secrecy. The only long-term keys the server stores are used for signing: during DHE they authenticate that the ephemeral PMS is derived with the server instead of a man in the middle. The main problem, namely the fact that the server has access to decrypted messages at some point, remains.

TLS – Server logfile compromise

Whenever you use TLS encrypted messaging tools

  • AOL Instant messenger
  • Blackberry Messenger
  • Ebuddy
  • Facebook Messages
  • QQ
  • Skype
  • SnapChat
  • Telegram Standard chats
  • Viber
  • Virtru
  • WhatsApp (old protocol)
  • XMPP-servers
  • Yahoo Messenger

there's a "trusted" man in the middle. The trust in this context is "trust us, or don't use our services". Data aggregation is real; Facebook is being sued over analyzing private conversations, and the best quote about the trustworthiness of a server seeing messages comes from Mark Zuckerberg, the CEO of Facebook:

Zuck: They “trust me”
Zuck: Dumb fucks.

Source

What's worse, the five applications colored red provide content and metadata to the NSA via the FBI.

Slide leaked by whistleblower Edward Snowden

The following diagram illustrates how subpoenas, NSLs and PRISM bypass TLS encryption completely. Other HSAs most likely have similar access to their domestic companies, and since HSAs often co-operate, obtaining data from servers in allied nations is trivial.

In cases where the server is in a neutral or hostile country, HSAs may compromise the server with malware to steal log files in real time or periodically.

Figure: TLS RSA server log exfiltration

TLS – Overview

The most common encryption online is called TLS (SSL). TLS uses either RSA or DHE key exchange:

RSA

Figure: TLS RSA

1. The server creates an RSA keypair, and submits the public encryption key along with information about the server's identity to a Certificate Authority (CA). The CA then signs the SHA256 hash of the submitted information and public key, and returns the new certificate to the server. Operating systems, browsers and clients trust data signed by CAs implicitly; the public signing keys of CAs come preinstalled in client-side software and operating systems.

2. The client and server exchange random data (client random, server random) during the initial handshake.

3. The server sends the client its CA-signed certificate, which contains the server's public encryption key. The client authenticates the received data by hashing it with SHA256, and by comparing the result with the hash obtained by decrypting the CA's signature using the CA's public signing key.

4. The client generates a pre-master secret (PMS) and sends it to the server, encrypted using the public encryption key of the server. The server decrypts the PMS using its private decryption key.

5. Client and server create master secret from PMS, client random and server random, and use the master secret to generate two AES keys.

6. Client and server use the two AES keys to exchange encrypted messages.

The HSA is unable to decrypt messages passively, because the PMS, from which the AES keys are generated, can only be decrypted with the server's private key. Since the encryption only happens between Alice and the server, and between Bob and the server, the server can observe, store and edit messages during transmission.
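
A sketch of step 4 using the pyca/cryptography library (TLS does use PKCS#1 v1.5 padding for the PMS; the rest is simplified for illustration):

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding

server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pms = os.urandom(48)  # the client's 48-byte pre-master secret
ciphertext = server_key.public_key().encrypt(pms, padding.PKCS1v15())

# Only the holder of the private key recovers the PMS; this is also why
# a stolen private key decrypts recorded traffic retroactively.
assert server_key.decrypt(ciphertext, padding.PKCS1v15()) == pms
```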

DHE

Figure: TLS DHE

1. The server creates an RSA keypair, and submits the public signing key along with information about the server's identity to a Certificate Authority (CA). The CA then signs the SHA256 hash of the submitted information and public key, and returns the new certificate to the server.

2. The client and server exchange random data (client random, server random) during the initial handshake.

3. The server sends the client its certificate along with its public signature verification key. The client authenticates the received certificate and public key using the public signing key of the CA.

4. The server generates a primitive root g, a prime p and a private DH value s, and calculates the public DH value g^s mod p.

5. The server sends the client g, p, g^s mod p, client random and server random, along with the SHA256 hash of these values. The hash is signed using the private key of the server.

6. The client authenticates g, p and g^s mod p using the public signing key of the server, received in step 3.

7. The client generates a private DH value a and, using it together with g and p, calculates the public DH value g^a mod p, which it sends to the server.

8. The client derives the shared secret (PMS) by calculating (g^s mod p)^a mod p.

9. The server derives the shared secret (PMS) by calculating (g^a mod p)^s mod p.

10. Client and server derive master secret from PMS, client random and server random, and use the master secret to generate two AES keys.

11. Client and server use the two AES keys to exchange encrypted messages.

The HSA is unable to decrypt messages passively, as the PMS is derived from public DH values using private DH values; the private DH values are never transmitted over the wire, and they are hard to calculate from the public values. Alas, the encryption again only happens between Alice and the server, and between Bob and the server: the server can observe, store and edit messages during transmission.
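
The arithmetic of steps 4-9 fits in a few lines (toy parameters only; real deployments use vetted, much larger primes):

```python
import secrets

# Toy parameters: far too small for real use, but the algebra is the same.
p = 2**32 - 5       # a small prime modulus
g = 2               # generator

s = secrets.randbelow(p - 2) + 1    # server's private DH value (step 4)
a = secrets.randbelow(p - 2) + 1    # client's private DH value (step 7)

server_pub = pow(g, s, p)           # g^s mod p, signed and sent in step 5
client_pub = pow(g, a, p)           # g^a mod p, sent back by the client

# Steps 8 and 9: both sides derive the same PMS; it never crosses the wire.
assert pow(server_pub, a, p) == pow(client_pub, s, p)
```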

Secure messaging should never rely on servers handling unencrypted data securely; there is no trustworthy man in the middle. Users would have to rely on blind faith that the gratis service isn't funded by selling private user data.