Current state of TLS

Qualys’ SSL test is an extremely useful tool for evaluating the security of a specific web domain, but it doesn’t provide statistics. The sites that do provide statistics don’t provide a database from which to parse information on TLS configurations. GCHQ has a program called FLYING PIG that tracks trends in TLS. We need an open source version of that.

I found a great tool for scanning domains: CipherScan.
Huge applause to Julien Vehent et al. for their effort.

I wrote a quick script that converts Alexa’s top 1M domains CSV file into a simple list while preserving order. It also removes any duplicates and paths that have made it into the original list. I then wrote another script that keeps N instances of CipherScan running, each analysing a different domain at a time. These aren’t my finest Python, but they get the job done.
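The two scripts aren’t reproduced inline, but their core logic can be sketched roughly like this. The `./cipherscan` path and the `-j` (JSON output) flag are assumptions based on CipherScan’s README; adjust them to your own checkout:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def extract_domains(csv_path):
    """Turn Alexa's "rank,domain" CSV into an ordered list of unique
    hostnames, stripping any URL paths that leaked into the list."""
    seen, domains = set(), []
    with open(csv_path) as f:
        for line in f:
            domain = line.strip().split(",", 1)[-1]
            host = domain.split("/", 1)[0].lower()  # drop paths like example.com/foo
            if host and host not in seen:
                seen.add(host)
                domains.append(host)
    return domains

def scan_all(domains, workers=20):
    """Keep `workers` CipherScan instances running, one domain each."""
    def scan(domain):
        # ./cipherscan location and -j flag are assumptions; see above
        return subprocess.run(["./cipherscan", "-j", domain],
                              capture_output=True, text=True).stdout
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(scan, domains))
```

A thread pool is enough here since the workers spend their time waiting on subprocesses, not on Python bytecode.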

Here’s the compressed file of everything you need to start analysing web-domains: files.tar.gz (SHA256 67dee99416c055f80abfccdecb11ad1acdb22ed05fb570f3fea85e2672113061)

Here are the top 1M domains analysed (I hereby place the dump into the public domain). dump.tar.gz (SHA256 e4889418993a3d34d7e362d4cec024437aa62d8c47347f7c7ca45b06419a5f0f)
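Before unpacking either archive, it’s worth checking the download against the published digest. A minimal streaming check in Python:

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream the file through SHA-256 so large dumps don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Compare against the published digest before unpacking, e.g.:
# sha256_of("dump.tar.gz") should equal the SHA256 listed above.
```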

Now for the analysis; good ones first. Here are the counts of domains that support the best cipher suites:
1. ECDHE_RSA_CHACHA20_POLY1305 (20 685)
2. ECDHE_ECDSA_AES256_GCM_SHA384 (32 375)
3. ECDHE_RSA_AES256_GCM_SHA384 (327 758)
4. ECDHE_ECDSA_AES128_GCM_SHA256 (32 380)
5. ECDHE_RSA_AES128_GCM_SHA256 (328 439)

Being on this list isn’t enough. Just because cutting-edge crypto is supported doesn’t mean the client and server actually use it. Not all clients support the latest cipher suites, so servers retain backwards compatibility to ensure users can connect to the web page. This causes problems, as a MITM can perform a TLS downgrade attack:

Alice to MITM:  I know AES, RC4 (~broken), RC2 (broken).
MITM to server: I know RC4 and RC2.
Server to MITM: The strongest common cipher is RC4, use that.
MITM to Alice:  The strongest common cipher is RC4, use that.

The server makes the final choice based on the client’s proposals, and the client will either accept it or drop the handshake.
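You can observe this negotiation yourself with Python’s `ssl` module: offer the server a deliberately restricted cipher list and see what it settles on. This is a sketch, not a full scanner; note that modern OpenSSL builds have dropped RC4/RC2 entirely, so you can only probe with whatever weaker suites your local library still compiles in:

```python
import socket
import ssl

def negotiated_cipher(host, ciphers, port=443, timeout=5):
    """Offer only `ciphers` (an OpenSSL cipher string) in the ClientHello
    and report which suite the server settles on, or None if the
    handshake is refused or the host is unreachable."""
    ctx = ssl.create_default_context()
    ctx.set_ciphers(ciphers)
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.cipher()[0]  # cipher() -> (name, protocol, secret bits)
    except (ssl.SSLError, OSError):
        return None
```

If `negotiated_cipher("example.com", "AES128-SHA")` returns a suite name rather than None, the server still accommodates a client that claims to know nothing better — exactly the situation the MITM above exploits.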

Users can configure their browser not to accept insecure ciphers, but not everyone has the skills to do that. The only way to pressure developers to update their code and users to update their clients is to stop supporting old crypto. Just as we need to know which companies send passwords to users over unencrypted email, we need to know which companies leave all their customers vulnerable so that a minority of users with their “Netscape browsers” won’t get upset when they need to update. Based on Alexa’s top ~1M domains, I created a set of lists of the domains that support a weak configuration.

No encryption:
Not available (420 042)
Not working (164 388)

Weak key exchange:
RSA-512 (34 168)
RSA-1024 (4 587)
DHE-512 (26 854)
DHE-1024 (233 043)

Weak symmetric ciphers:
RC2 (34 866)
RC4 (292 561)
DES (61 692)
3DES (499 522)

Weak MAC:
SHA1 (567 302)
MD5 (212 612)

Weak protocol:
SSL2 (35 243)
SSL3 (678 079)
TLS1 (565 269)
TLS1.1 (976 863)

Weak Certificate:
RSA-1024 signature (46)
DSS signature (14)
SHA1 fingerprint (175 942)
MD5 fingerprint (4 689)
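Tallies like the ones above can be produced from the dump with a short script. The `"ciphersuite"`/`"cipher"` field names below follow CipherScan’s JSON output as I recall it; treat them as an assumption and verify against a file from your own dump:

```python
import json
from collections import Counter

# Substring markers for weak symmetric ciphers; "DES" also catches
# 3DES suites such as DES-CBC3-SHA.
WEAK_MARKERS = ("RC2", "RC4", "DES", "MD5")

def load_result(path):
    """Parse one CipherScan JSON result file."""
    with open(path) as f:
        return json.load(f)

def tally_weak(results):
    """Count how many scan results offer at least one cipher suite
    matching each weak marker. `results` is an iterable of parsed
    CipherScan result dicts."""
    counts = Counter()
    for result in results:
        offered = " ".join(cs.get("cipher", "")
                           for cs in result.get("ciphersuite", []))
        for marker in WEAK_MARKERS:
            if marker in offered:
                counts[marker] += 1
    return counts
```

Each domain is counted once per marker regardless of how many matching suites it offers, which matches the per-domain lists above.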


6 thoughts on “Current state of TLS”

  1. Good article. Just a quick rant about the security theater of the current state of TLS, or any encryption: having crypto is not good enough, because the implementations are flawed and robust key management doesn’t really exist. A good number of these websites are running TLS without proper security on their key material. It is as good as putting the keys where thieves can reach them (RAM) through the huge holes in the window grills, with the key left on the table. TLS deployment is just for compliance, or security theater at best. The current state of dragnet CNE exploits is getting worse.


    • “implementations are flawed and robust key management doesn’t really exist.”
      The implementations are slowly getting there with every update to TLS; backwards compatibility is the big problem. Key management’s robustness depends on what your threat model is: hackers? Local governments with no access to foreign servers/CAs? FVEY with a much broader arsenal and access? The capabilities of attackers vary, so the incremental security TLS provides might be enough under some threat models.

      “The current state of dragnet CNE exploits is getting worse.”
      Bulk CNE / the panopticon, where you have to assume your endpoint can be compromised at any given time, is part of the mass surveillance practised by governments (FVEY foremost), either already or in the near future. We therefore can’t talk about PKI-based TLS as secure against mass surveillance. Hardening the endpoint increases the risk on the attacker’s side, and if Snowden is right, it might be that defenders are slowly catching up. But many times people need high assurance. As I’ve discussed on this blog, TLS isn’t secure against mass surveillance, as it requires a trusted third party. If you need more security, you need a PGP web-of-trust for code signing and something like TFC to secure the endpoints.


      • I would say attacks are getting cheaper (quoting @Bruce Schneier and the usual suspects in the Bruce commentary), to the point where just about any medium-skilled hacker can get into a server and start sieving through RAM for key material. We have to assume medium-scale CNE is feasible for a motivated organized-crime group or partially state-sponsored actor: the usual JavaScript/Java/Flash insertion of a payload into RAM, defeating protective measures to scrape memory and exfiltrate it (on a wider scale).

        TLS is indeed a fragile ecosystem. Things like Convergence (distributed CA) and CONIKS try to solve the attestation of public keys, but that is just half of it; the other half lies in higher-assurance deployments (at least leveraging a properly configured and managed HSM for the PKI side).

        TFC can be a little tricky on fully automated systems like secure web servers, since TFC is designed for a human to transfer data between Tx/Rx in a more manual (and secure) fashion.



      • “I would say attacks are getting cheaper”
        I would love any sources and arguments the community has on this in terms of governments. With the current political atmosphere, it’s at least likely that “cyber” budgets are increasing, meaning more exploits can be obtained. Exploits are often sold exclusively, so I’d expect prices to rise as global demand grows. Organized crime likely has the funds to purchase exploits, so you’re probably right about less-hardened servers being targeted just because they can be.

        CONIKS still has the problem of a continuous automated MITM attack against endpoints, meaning the user will never find out whether the fingerprint of a client or server has changed. We need some type of off-band verification for key-directory servers as well. I’ll look into Convergence when I have time. Thanks for posting the links.

        “TFC can be a little tricky on automated systems”
        It’s unlikely we will ever see a perfect guard that can provably protect the loopback from RxM to TxM, so it’s unlikely you can implement a server with equivalent security. If the server belongs to the user, there is high-assurance, off-band access; if it doesn’t, the server belongs to an untrusted third party who is not going to set up TFC with you. Maybe. I could very well imagine you having a personal account manager at a Swiss bank who does transactions for you based on anonymous and secure requests through TFC.

