The other day, I set out to implement what SSL Labs deems the best standard of security in SSL (or should I say, TLS).
The resulting increase in security over a traditional, properly configured server is probably negligible: this setup mostly differs from a standard SSL configuration in its larger key sizes and in enforcing the newer TLS 1.2 (which, annoyingly, breaks compatibility with most Android devices).
At this point, failures in the software implementation of the cryptographic algorithms or protocols (e.g. OpenSSL's current coding standards), problems in end-user devices, server misconfigurations, or weaknesses in the cryptographic algorithms themselves are more likely to undermine the security model. One could call this an exercise in futility, but it's a fun experience, and at least some of the precautions listed here genuinely improve security.
- Ensure updated software
The first thing we should do is ensure we are running the latest versions of our web server software and cryptography suite. I will assume nginx as the web server daemon and OpenSSL as the cryptographic suite. SSL Labs applies big score cuts when known widespread (or severe) vulnerabilities are found, and rightfully so; however, its search for vulnerabilities is not exhaustive. Even with very big key sizes, all of the encryption can fail simply because of outdated software, so this is of the utmost importance.
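A quick way to see what you are actually running; note that nginx is built against a specific OpenSSL, which can differ from the system `openssl` binary:

```shell
# Print the OpenSSL version in use:
openssl version
# If nginx is installed, -V also shows the OpenSSL it was built with:
command -v nginx >/dev/null 2>&1 && nginx -V || true
```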
- Use a strong certificate
When creating your CSR, it's good practice to use keys significantly bigger than the Lenstra / Verheul recommendations, which are, in practice, what most organizations (NIST, ECRYPT, ...) base their own recommendations on.
For 2014, in asymmetric cryptography, 1216-1562 bits is the recommendation, providing ~80 bits of security. Accordingly, most organizations call for 2048-bit RSA keys as the acceptable minimum, the next round key size above that range.
For reference, in 2002, the lower bound of Lenstra / Verheul was 768 bits. Seven years later, an RSA key of that size was factored.
A 2048-bit RSA key should be good until the year 2023; however, for the purposes of this exercise, we will need 4096-bit keys, which should last until the year 2049 by the same criteria. Of course, these guesstimated dates are very hypothetical, and the method used to generate them does not account for the possibility of quantum computer development.
We should also use SHA-256 for the hash, which is in line with the same recommendations (162 bits for 2014).
Note: if you are generating your own self-signed certificate, be sure to pin it in your browser to ensure proper authentication of your website. Trustworthy certificates from known CAs are one of the criteria of SSL Labs' tests; you can use StartSSL for this, but bear in mind that revocations are not free.
On your terminal:
openssl genrsa -out ~/domain.com.ssl/domain.com.key 4096
openssl req -new -sha256 -key ~/domain.com.ssl/domain.com.key -out ~/domain.com.ssl/domain.com.csr
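It's worth sanity-checking that the key really is 4096 bits and that the CSR really uses SHA-256. The sketch below repeats the generation in a temporary directory with a placeholder subject so it is self-contained; point the inspection commands at your real files instead:

```shell
# Generate a throwaway key and CSR, then inspect them:
dir=$(mktemp -d)
openssl genrsa -out "$dir/domain.com.key" 4096 2>/dev/null
openssl req -new -sha256 -key "$dir/domain.com.key" \
        -subj "/CN=domain.com" -out "$dir/domain.com.csr"
# First line reports the key size:
openssl rsa -in "$dir/domain.com.key" -noout -text | head -n 1
# The CSR should report a SHA-256-based signature algorithm:
openssl req -in "$dir/domain.com.csr" -noout -text | grep "Signature Algorithm"
```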
- Use only strong protocols
SSL 3 is no longer considered safe: if your website accepts it, consider disabling it. Almost all clients are compatible with TLS 1.0, which may still be affected by some client-side vulnerabilities (e.g. the BEAST attack) but should be marginally more secure than SSL 3. The reasonable recommendation, then, is to enable all protocols from TLS 1.0 up to TLS 1.2 and let the client negotiate the best option.
Now, to pass this test with 100% on Protocol Support, we need to enable TLS 1.2 only. This will also let us negotiate the best ciphers by default.
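In nginx, restricting the protocol comes down to a single directive; a sketch for the server block (your configuration may set it at the http level instead):

```nginx
ssl_protocols TLSv1.2;
```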
- Set up your own Diffie-Hellman parameters
By default, nginx ships with 1024-bit DH parameters. I'd recommend 2048 bits, but to achieve our goal we need a 4096-bit prime. For better security, we should either use trusted primes from official sources (sometimes these deliberately don't generate the entire cyclic group, to avoid leaking the last bit) or generate our own (this will take a long time even on good computers; at least 30 minutes in my experience).
To generate parameters, on your terminal:
openssl dhparam -out dhparams.pem 4096
and after generating them, on your nginx config:
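The directive that loads the parameters looks like this (the path is an assumption; point it at wherever you stored dhparams.pem):

```nginx
ssl_dhparam /etc/nginx/dhparams.pem;
```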
These parameters don't need to be private, since they are publicly disclosed every time you make a DH key exchange. However, they should come from a trusted source because, if you obtain them from an attacker, they may not be primes at all (!) or may be weakened in some other way.
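Before trusting parameters you did not generate yourself, you can have OpenSSL verify the modulus and generator with `-check`. The sketch below generates a small throwaway set so it runs quickly; in practice you would point `-in` at your real dhparams.pem:

```shell
# Generate small throwaway DH parameters, then validate them;
# -check reports whether the parameters "appear to be ok":
dir=$(mktemp -d)
openssl dhparam -out "$dir/dhparams.pem" 1024 2>/dev/null
openssl dhparam -in "$dir/dhparams.pem" -check -noout
```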
- Choose your cipher order and enforce it
With the latest version of OpenSSL and nginx, the default cipher list should generate very good protection and is my general recommendation. In nginx:
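If you want to state the default explicitly, nginx's documented default cipher string can be written out as follows (check the ngx_http_ssl_module documentation for your exact version):

```nginx
ssl_ciphers HIGH:!aNULL:!MD5;
```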
The default ciphers enabled provide strong forward secrecy, where a temporary key is derived at each connection, adequate levels of security and proper cipher order. If you see RC4 ("arc four") in your cipher list, you will need to update your OpenSSL and nginx installations, since there have been some troubling discoveries about RC4's security.
Note that in this case, we only want ciphers with 256-bit symmetric keys, so we will need to disable AES-128 and the ciphers using CAMELLIA encryption.
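One way to see what such a restriction leaves enabled is to ask the local OpenSSL directly. The cipher string below is an illustrative assumption, not the article's exact list; AES-128 and CAMELLIA suites are excluded by construction (on OpenSSL 1.1.1+, the fixed TLS 1.3 suites are listed up front regardless of the string):

```shell
# Expand a 256-bit-only, forward-secret cipher string; EECDH/EDH
# select ephemeral (EC)DH key exchange, !aNULL/!MD5 drop weak suites:
openssl ciphers -v 'EECDH+AES256:EDH+AES256:!aNULL:!MD5'
```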
And finally, we need to ensure that the server's cipher order is enforced, rather than the client's. This is also a good general recommendation, especially when combined with a good cipher list.
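Putting the last two points together in nginx might look like this (the cipher string is an illustrative assumption):

```nginx
ssl_ciphers EECDH+AES256:EDH+AES256:!aNULL:!MD5;  # 256-bit, forward-secret suites only
ssl_prefer_server_ciphers on;                     # enforce the server's cipher order
```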
- Use HSTS
You may already redirect your secure domain (or subdomain) to an HTTPS address; however, by mistake or as part of an attack, your client may, for example, POST to an HTTP address instead of the HTTPS one you intended. To make sure the browser treats that subdomain as HTTPS-only, you can use the following nginx configuration line:
add_header Strict-Transport-Security max-age=31536000;
This will make sure that the subdomain is always reached through a secure connection. This header should be served in an HTTPS connection in the first place.
You should also add the following header:
add_header X-Frame-Options DENY;
to make sure that the website cannot be placed inside an iframe.
- Some interesting final recommendations
Recently, I've noticed that Chrome does not check the revocation status of certificates using OCSP. It instead keeps its own database of revoked certificates. There are currently several problems with certificate revocation systems:
- Keeping a database of all revoked certificates means having to download a possibly outdated large file (CRL).
- Keeping a system where you can query certificate hashes for revocation information means that every time you access a website, you are pinging the revocation system. Worse, sometimes these systems are overloaded and cannot respond, in which case the client assumes the certificate is valid; this lets an attacker simply block the client's revocation queries to make it believe the certificates are good (traditional OCSP).
Now, this can be solved using a very clever idea known as OCSP stapling. First, consider that OCSP responses are cryptographically signed and timestamped, so they cannot be forged. If we cache a valid OCSP response on our server and serve it to our clients, they know the certificate is valid without pinging the revocation system, and thus without telling it which websites we visit. We can enable this in nginx:
ssl_stapling on; resolver 8.8.8.8;
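For the stapled responses to be trustworthy, it's also worth having nginx verify them against the CA chain; a fuller sketch (the file path and resolver address are assumptions):

```nginx
ssl_stapling on;
ssl_stapling_verify on;                           # verify stapled responses against the chain below
ssl_trusted_certificate /etc/nginx/ca-chain.pem;  # CA and intermediate certificates (assumed path)
resolver 8.8.8.8;                                 # any DNS resolver you trust
```

You can check that stapling works with `openssl s_client -connect domain.com:443 -status`, which prints the stapled OCSP response if the server sends one.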
Finally, to increase performance, consider enabling session caching:
ssl_session_cache shared:SSL:5m; ssl_session_timeout 5m;