Ricardo's notes

Man in the Middle attack vs. Cloudflare's Universal SSL

Man-in-the-middle attack

MitM attacks are a class of security attacks that compromise the authentication of a secure connection. In essence, an attacker builds a transparent tunnel between the client and the server, but ensures that the client negotiates the secure connection with the attacker instead of the intended server. The client thus has a secure connection not to the server but to the attacker, which in turn sets up its own secure connection to the server, relaying messages between the two without raising suspicion while eavesdropping on the traffic.

The reader may ask: how do clients know they are talking to the intended server, then? Indeed, no matter how cryptographically strong the established channel is, or how difficult the encryption is to crack, a channel that is not established correctly isn't secure at all.

The trust model currently used on the Web involves the generation of certificates, which are signed by audited companies called Certification Authorities (CAs) and are trusted by your browser as soon as you install it. If you were to sign your own certificate, most browsers would complain that the certificate isn't trusted, in the sense that, while it might be possible to establish a secure connection, no audited entity has vouched for the server you've reached.
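To see what "signing your own certificate" means in practice, here is a sketch using openssl; the paths and the CN are made-up example values:

```shell
# Generate a throwaway self-signed certificate (example paths and subject)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example.test" \
    -keyout /tmp/selfsigned.key -out /tmp/selfsigned.crt

# For a self-signed certificate, subject and issuer are the same entity --
# no third party has vouched for it, which is exactly why browsers complain
openssl x509 -in /tmp/selfsigned.crt -noout -subject -issuer
```

Both printed lines name the same entity, since nobody but the key holder signed the certificate.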

Illustration of man-in-the-middle attack
By Miraceti (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons

Cloudflare's Universal SSL

The easy to configure Universal SSL can work in several ways:

  • Flexible SSL: The word "flexible" here is used in the sense that it "bends" the idea of SSL. Only the connection between the user and Cloudflare is secure, so websites set up this way will always have their traffic sent in cleartext over the last mile (ie, between Cloudflare and your server). The advantage of encrypting your traffic only half of the way isn't clear to me: a channel is only as secure as its weakest link, so this is exactly as secure as cleartext (ie, no encryption at all). A waste of computer cycles. It will work, though, if you intend to deceive your users into a false sense of security.
  • Full SSL: That's regular SSL, but with Cloudflare in the middle, holding your private key. Is this "full", given that they insist on looking at everything that goes between your users and your website?
  • Strict Full SSL: This is exactly the same, except you paid for a trusted certificate signed by a CA. This translates to no actual security improvement over Full SSL, because if you own the Cloudflare account, you certainly gave them an SSL certificate that you trust for your server, and Cloudflare will automatically refuse any certificate but that one.
  • Keyless SSL: Well, for once the name isn't what I would consider deceptive. They will still read your traffic, just without actually holding your keys.

Errata: In fact, Full SSL does not even validate your self-signed certificate (no certificate pinning, fingerprint check or anything of that sort). To prevent a MitM attack effectively, you'd need to, ironically, pay for a regular CA-issued certificate and use Strict Full SSL. So, unfortunately, Strict Full SSL does translate into an improvement over Full SSL, since the latter is weaker than I thought.

For comparison, I will leave their image:


Source: Cloudflare, https://www.cloudflare.com/ssl

The problem with Flexible SSL

Flexible SSL makes it easy to create a secure connection and have it mean nothing. Do you need a trusted certificate for your latest phishing scheme? Just host it regularly on your insecure server and set it up on Cloudflare: that padlock might just seal the deal for the distracted user. I'm not giving the reader a brilliant criminal idea; I'm sure this is rather obvious to any serious cybercriminal who creates those realistic website copies and the appealing emails that lead people to them. They have been trying to emulate the security features of real websites, but setting up trusted SSL has been a challenge. Now SSL is within their reach, even without the minimum knowledge of how to configure SSL servers.

It subverts the idea of a secure channel, because the channel is not secure by any reasonable definition, given that the data is transmitted in the clear at some point over the public internet; the idea of authentication, given that you are no longer interacting with the website's actual servers; and the idea of trust, since the thousands of bogus certificates issued this way will not ensure users' security, leading me to distrust the trust model of the entire Web. That's pretty severe right there.

Giving everyone access to SSL

I'm all for the proliferation of SSL, and security is indeed too difficult for the average webmaster to figure out. This means, unfortunately, that some websites that ask for your private data send it in the clear. Surely SSL for everybody is much better, then?

I'd argue not really. Not only does it empower anyone to create malicious websites (see above), but it also empowers people who don't know security to do it badly. And with Flexible SSL available, the easiest and default option is exactly that.

Do you trust Cloudflare entirely?

Enabling Universal SSL gives your users a sense of security: that the data they are sending is protected from the prying eyes of attackers. Remember, though, that in this setup Cloudflare has access to the entire data stream in cleartext, so your transmission is only as secure as Cloudflare's infrastructure: one zero-day exploit is all it takes to read the traffic of potentially millions of websites (it could take more than one attack, but certainly not a number proportional to the number of websites affected, since a single Cloudflare endpoint mediates traffic to multiple websites).

Thankfully, in this regard, Cloudflare probably has better odds of tackling zero-day vulnerabilities than you do, given its position in the industry and its access to not-yet-public exploits. On the other hand, the odds that your website will be attacked increase when you're using a popular platform like this one, since an attack on the platform is an indirect attack on you.

One thing is for sure, though: if you're trying to protect yourself from government-mandated spying in light of the recent news on mass surveillance, you can probably only trust your own server, and even then not that much. Adding an intermediary effectively adds yet another point of attack. Worse, Cloudflare is an attractive one at that, given the sheer volume of traffic flowing through it, especially if, say, a government agency could convince the company to cooperate with its alleged spying efforts.

An actual solution

Starting a free Certificate Authority would certainly help the Web become more secure. There have been efforts to do this, but the money hasn't been raised to properly audit these CAs (eg, CAcert) and thus they aren't trusted by any widely distributed browser.

These free CAs could tackle phishing because:

  • they would require traditional SSL setup, which is still the only way of creating a truly secure channel;
  • this setup requires effort and knowledge that your average cybercriminal (or security-oblivious webmaster) isn't willing to spend time on;
  • they could provide automated revocation of certificates given to domains reported as phishing.

Other measures could be devised to properly tackle the trust issue, including a quasi-free model in which you are charged a very small amount on a valid payment method. Only after ensuring that the charge is not refunded within a few days, as is likely with stolen credit cards, could the user create a certificate. Such a measure could discourage the creation of bogus certificates and possibly provide identity records of abusive phishers.

The nail in the coffin

... and more to come.

Disclaimer: I use the regular Cloudflare (without their free SSL).

Using fail2ban to monitor and stop common attacks

Fail2ban is an awesome tool that automatically monitors your log files for suspicious activity and bans offending IP addresses. It scans your logs, matches warnings against regular expressions, and issues bans using iptables.

To install, in Debian-based distributions (including Ubuntu derivatives):

sudo apt-get install fail2ban

To configure, first change to the settings directory:

cd /etc/fail2ban/

When listing the directory, you should get output similar to the following:

$ ls -ls
total 36
4 drwx 2 root root action.d
4 -rw- 1 root root fail2ban.conf
4 drwx 2 root root fail2ban.d
4 drwx 2 root root filter.d
16 -rw- 1 root root jail.conf
4 drwx 2 root root jail.d

A small description of each file and what it does:

  • fail2ban.conf sets the general options; the defaults should be fine;
  • jail.conf sets the options of each log file monitor, which is called a "jail";
  • jail.d/ is the folder that contains the settings for each jail if you choose to separate them instead of listing them all in jail.conf;
  • filter.d/ is the folder that contains the rules that scan the log files;
  • action.d/ is the folder that contains the possible actions to execute when a jail is triggered.

Usually, it is recommended to copy jail.conf to jail.local as a local settings file, and to edit the .local one:

cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Then, we should enable some of the default jails included, such as ssh and ssh-ddos. You will find several examples in the default configuration file; to enable them, just set enabled = true in each jail. You can also set the actions (usually blocking via iptables) and the ban times.
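As an illustration, a minimal jail.local enabling the ssh jail might look like the following; the jail name, log path and values mirror Debian's stock jail.conf of the time, so check the ones your distribution ships:

```
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 6
bantime  = 600
```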

For the changes to take effect, restart the service:

sudo service fail2ban restart

You can create your own filters to monitor your own applications' log files. To do this, add your .conf file to filter.d and reference it by its name (without the .conf file extension) in your jail configuration file, setting the actions and the ban time. You can look at the other filters to get a sense of the syntax; it is fairly basic, with a regular expression that matches offending IPs (failregex) and another that excludes them (ignoreregex). You can test your filters with the command:

fail2ban-regex /path/to/logfile /etc/fail2ban/filter.d/myfilter.conf <yourignoreregex>

Your ignore regex will have to be specified in the command.
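For example, a hypothetical filter.d/myfilter.conf for an application that logs failed logins could look like this; the log format, and therefore the failregex, is made up for illustration (<HOST> is fail2ban's placeholder for the IP to ban):

```
[Definition]
# Matches lines such as: "Failed login for admin from 203.0.113.7"
failregex = ^.*Failed login for \S+ from <HOST>$
# Never ban attempts coming from localhost
ignoreregex = ^.*from 127\.0\.0\.1$
```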

Now, probably the most important part: how to unban yourself (if you are doing this, it will happen, trust me). First, list the iptables rules:

sudo iptables -L --line-numbers

Find the number of the rule that is locking you out, as well as the name of the chain (usually the jail name prefixed with fail2ban-), and delete it:

sudo iptables -D <chain> <line number>

There's a special jail that seems attractive to enable but requires care: the recidive jail, which bans repeat offenders. You may want to avoid any mistakes that could get you banned too often, since fail2ban keeps tabs on repeating IPs in its own log, /var/log/fail2ban.log. Thus, the recidive jail survives restarts, and it will ban you from all communication with the server until the ban time expires. Think about that and set an adequate ban time in your configuration, or disable it if you don't like its aggressiveness. The other jails continue to protect your system even if you choose to disable recidive.

Scoring A+ 100/100/100/100 on SSL Labs

The other day, I set out to implement what SSL Labs deems the best standard of security in SSL (or should I say, TLS).


I probably now have a negligible increase in security compared to a traditional, properly configured server, considering that this setup mostly differs from standard SSL configurations in its increased key sizes and in enforcing the newer TLS 1.2 (while, annoyingly, breaking compatibility with most Android devices).

At this point, any failures in the software implementation of the cryptographic algorithms or the protocols (i.e. OpenSSL's current coding standards), problems in end-user devices, server misconfigurations, or the cryptographic algorithms themselves are more likely to corrupt the security model. One could say this is an exercise in futility, but it's a fun experience, and one can certainly take at least some of the precautions listed here to improve security.

  • Ensure updated software


The first thing we should do is ensure we have the latest versions of our web server software and cryptography suite. I will assume nginx as the web server daemon and OpenSSL as the cryptographic suite. SSL Labs applies big score cuts when known widespread (or severe) vulnerabilities are found, and rightfully so; however, its search for vulnerabilities is not exhaustive. Even with very large key sizes, all of the encryption can fail simply due to outdated software, so this is of the utmost importance.

  • Use a strong certificate


When creating your CSR, it's good practice to use keys significantly bigger than the Lenstra / Verheul recommendation[1], which is in practice what most organizations recommend (NIST, ECRYPT, ...).

For 2014, in asymmetric cryptography, 1216-1562 bits is the recommendation, providing ~80 bits of security. Accordingly, most organizations call for 2048-bit RSA keys as the acceptable minimum, which is the next round key size.

For reference, in 2002 the lower bound of Lenstra / Verheul was 768 bits. Seven years later, a key of that size was factored.

Using a 2048-bit RSA key should be good until the year 2023; however, for the purposes of this exercise, we will need 4096-bit keys, which should last until the year 2049 by the same criteria. Of course, these guesstimated dates are very hypothetical, and the method used to generate them does not contemplate the possibility of quantum computer development.

We should also throw in SHA-256 for the hash, which is in line with the same recommendations (162 bits for 2014).

Note: If you are generating your own self-signed certificate, be sure to pin it to your browser to ensure proper authentication of your website. Trustworthy certificates from known CAs are one of the criteria of SSL Labs' tests, you can use StartSSL for this but bear in mind that revocations are not free.

On your terminal:

openssl genrsa -out ~/domain.com.ssl/domain.com.key 4096
openssl req -new -sha256 -key ~/domain.com.ssl/domain.com.key -out ~/domain.com.ssl/domain.com.csr
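Since the req command above prompts interactively for the subject, here is a self-contained variant (using throwaway example paths and subject) that also verifies the key size and the CSR's signature algorithm afterwards:

```shell
# Generate a 4096-bit key and a SHA-256 CSR non-interactively (example subject)
openssl genrsa -out /tmp/example.key 4096
openssl req -new -sha256 -key /tmp/example.key \
    -subj "/CN=example.test" -out /tmp/example.csr

# Confirm the key size -- should report a 4096 bit key
openssl rsa -in /tmp/example.key -noout -text | head -n 1

# Confirm the signature algorithm -- sha256WithRSAEncryption
openssl req -in /tmp/example.csr -noout -text | grep "Signature Algorithm"
```
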
  • Use only strong protocols


SSL 3 is no longer considered safe: if your website accepts it, consider disabling it. Almost all clients are compatible with TLS 1.0, which may still be affected by some client-side vulnerabilities (e.g. the BEAST attack) but should be marginally more secure than SSL 3. You should then enable all the protocols from TLS 1.0 up to TLS 1.2 and expect the client to negotiate the best option. That's the reasonable recommendation.

Now, for passing this test with 100% on Protocol Support, we need to enable TLS 1.2 only. This will allow us to negotiate by default the best ciphers as well.

nginx config:

ssl_protocols TLSv1.2;
  • Set up your own Diffie-Hellman parameters

By default, nginx ships with 1024-bit parameters for DH. I'd recommend 2048 bits, but to achieve our goal we need a 4096-bit prime. For better security, we should either use trusted primes from official sources (sometimes these deliberately don't generate the entire cyclic group, to avoid leaking the last bit) or generate our own (which will take a long time even on good computers; at least 30 minutes in my experience).

To generate parameters, on your terminal:

openssl dhparam -out dhparams.pem 4096

and after generating them, on your nginx config:

ssl_dhparam /path/to/dhparams.pem;

These parameters don't need to be private, since they are publicly disclosed every time you make a DH key exchange. However, they should come from a trusted source because, if you obtain them from an attacker, they may not be primes at all (!) or may be weakened in some other way.

  • Choose your cipher order and enforce it


With the latest versions of OpenSSL and nginx, the default cipher list should provide very good protection and is my general recommendation. In nginx:

ssl_ciphers "HIGH:!aNULL:!MD5:!3DES";

The default enabled ciphers provide strong forward secrecy (a temporary key is derived for each connection), adequate levels of security, and a proper cipher order. If you see RC4 ("arc four") in your cipher list, you will need to update your OpenSSL and nginx installations, since there have been some troubling discoveries about RC4's security.

Note that in this case we only want ciphers with 256-bit symmetric encryption keys, so we will need to disable AES-128 and some ciphers using CAMELLIA encryption.

ssl_ciphers "HIGH:!aNULL:!MD5:!3DES:!CAMELLIA:!AES128";
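You can preview exactly which ciphers a given string expands to, in order of preference, with the openssl ciphers tool:

```shell
# List the ciphers the string expands to, most preferred first,
# with protocol, key exchange, authentication and encryption details
openssl ciphers -v 'HIGH:!aNULL:!MD5:!3DES:!CAMELLIA:!AES128'
```

This is a handy way to confirm that no AES-128 or CAMELLIA suite slipped through before touching the server configuration.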

And finally, we need to ensure that the order is respected, as given by the server. This is also a good general recommendation, especially when combined with a good cipher list.

ssl_prefer_server_ciphers on;
  • Use HSTS


You may already redirect your secure domain (or subdomain) to an HTTPS address; however, by mistake or as part of an attack, your client may, for example, POST to an HTTP address instead of the HTTPS one you would have wanted. To make sure the browser treats that subdomain as HTTPS-only, you can use the following nginx configuration line:

add_header Strict-Transport-Security max-age=31536000;

This will make sure that the subdomain is always reached through a secure connection. This header should be served in an HTTPS connection in the first place.

You should also add the following header:

add_header X-Frame-Options DENY;

to make sure that the website cannot be placed inside an iframe.

  • Some interesting final recommendations


Recently, I've noticed that Chrome does not check certificate revocation using OCSP. It instead keeps its own database of revoked certificates. There are currently several problems with certificate revocation systems:

  1. Keeping a database of all revoked certificates means having to download a possibly outdated large file (CRL).
  2. Keeping a system where you can query certificate hashes for revocation information means that every time you access a website, you are pinging the revocation system. Worse, sometimes these systems are overloaded and cannot respond, in which case the client assumes the certificate is valid, thus enabling an attacker to simply block the client's requests to make it believe the certificates are good (traditional OCSP).

Now, this can be solved using a very clever idea. First, consider that OCSP responses are cryptographically signed and timestamped, so they cannot be forged. So, if we cache a valid OCSP response on our server and serve it to our clients, they know the certificate is valid, and they avoid pinging the revocation system, thus not telling it which websites we visit. We can enable this on nginx:

ssl_stapling on;
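For stapling to actually work, nginx usually needs a few companion directives alongside ssl_stapling. A sketch, where the certificate path and resolver address are example values to adapt:

```
ssl_stapling on;
ssl_stapling_verify on;
# Certificates of the CA chain that signed yours, used to verify the OCSP response
ssl_trusted_certificate /path/to/ca-chain.pem;
# DNS resolver nginx uses to reach the OCSP responder (example: Google's)
resolver 8.8.8.8;
```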

Finally, to increase performance, consider enabling session caching:

ssl_session_cache shared:SSL:5m;
ssl_session_timeout 5m;

[1] http://www.keylength.com

