Firefox and Self-Signed Certs
Version 1.01 - 2010-11-04
In summer 2008 there was a lot of discussion on the topic of Firefox's warnings about SSL certificates which are invalid or not signed by a known authority ("self-signed"). Comment, offering various levels of support or criticism, came from Lauren Weinstein, Robert Accettura, Slashdot (twice), PC World, Nat Tuck, BetaNews, and Pingdom. I also had a similar, quite heated discussion over dinner at State of the Map 2008. People also seemed suspicious of the motivations behind Firefox's recognition of EV certificates.
This page explains why Firefox's SSL UI was designed the way it was, and why we think it's right.
Security = Encryption * Authentication
Before we begin, we must understand that Security = Encryption * Authentication. World-class encryption * zero authentication = zero security. This is an absolutely crucial point which sadly is not grasped by many participants in the debate. There's no point in encrypting your data with a secret key if you don't know whose key it is and who you are sending it to. That is a recipe for a Man in the Middle attack. Lauren Weinstein said:
But in many situations, we're not concerned about identity in particular, we just want to get the basic https: crypto stream up and running.
But there is no such situation. If you don't know who you are talking to, then encryption is useless because you could be talking to an attacker. (Self-signed certs actually do provide some ways for you to know who you are talking to in a limited fashion, and we'll come on to those.) If you don't do anything to check whether you are talking to who you think you are, or if you do the wrong checks, then all the crypto is a waste of time.
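To make this concrete, here is a rough sketch, using Python's standard ssl module (the host name is just a placeholder), of the difference between a connection that checks who it is talking to and one that merely encrypts. The first refuses to proceed unless the certificate chains to a trusted root and names the host we asked for; the second still encrypts the wire, but a man in the middle can substitute their own key and read everything.

    import socket
    import ssl

    host = "example.com"   # placeholder host for illustration

    # Encryption *and* authentication: the default context requires a chain to a
    # trusted root and checks that the certificate matches the name we asked for.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.getpeercert()["subject"])   # the identity we just verified

    # Encryption without authentication: this context will happily complete the
    # handshake with anyone at all, including an attacker in the middle.
    ctx_noauth = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx_noauth.check_hostname = False       # don't check whose name is on the cert
    ctx_noauth.verify_mode = ssl.CERT_NONE  # don't check who vouches for the cert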
Some argue that adding encryption, even without authentication, is a bar-raising exercise which defends against basic packet sniffing. Nat Tuck says:
Snooping a connection (i.e. on a wireless link) is much easier than any of the impersonation attacks that SSL authentication prevents.
However, that's simply no longer true: even with the bar "raised" by unauthenticated encryption, mounting a MITM attack is still a point-and-click exercise these days. For more details of just how easy it is, see johnath's blog post.
So some authentication is required. But how much? Let's look at three use cases for secure connections on the web. They are, in decreasing order of authentication strength:
- You want to know what organization you are talking to.
- You want to know that you are connected to the right domain name.
- You want to know that you can repeatably connect to the same person or place.
1. Strong Organizational Identity
Scenario 1 applies in situations such as banking or e-commerce. Rather than have to know and remember that http://www.bankofamerica.com/ is Bank of America, whereas http://www.bank-of-america.com/ is not, Joe Public would much rather that his web browser said "Bank of America, Inc. (US)" to remove all doubt. Joe would like to know that someone had actually gone away and made sure that only the real Bank of America in the US can get a cert which says that, and that, if he gets scammed, someone knows where to send the police.
This is, in a nutshell, the service that EV certs provide. Now, some have said that this is what the CAs should have been doing all along, and how come they get to charge more money for it now? Even after noting that DV isn't good enough, Robert Accettura still claims that:
Essentially EV SSL is nothing more than a scheme to charge more. EV SSL is supposed to do what a signed certificate should have been doing all along.
That second sentence may be true, although one can argue about who was responsible for the pre-EV situation in the certificate market. For a long time, certs with almost no checking looked the same in browser UI as certs with good checking. So all the incentives were to do less checking. But then, no UI differentiation was possible, because there was no independent standard for what "good checking" meant anyway. There were faults on both sides.
Prices for standard domain-validation certs are now as low as free - and for that money, you get no identity checking at all. EV certs cost EV prices because the CAs actually do all the checks defined in this document (PDF), and are audited to make sure they do. If you think EV doesn't add any extra protection, tell us how fraudsters can fool those checks. We helped write this standard, and we will fix anything that is broken.
We think that EV provides us a way to put a human-readable organizational identifier in the browser UI with high confidence that it's correct. There is no other existing technical way to obtain this sort of reliable identifier. And we think this is useful in helping people to know the identity of who they are talking to. That's the value EV provides - authenticated identity.
2. Domain Verification
Scenario 2 applies in situations like webmail. Your email address is fred@myispmail.com, so you want to know you are at myispmail.com, and you want encryption, but you don't particularly need to be told who runs myispmail.com - you know it's MyISP.
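As a rough sketch of what that check buys you, here is how a client can see exactly what a domain-validated certificate asserts (the domain below is the fictional one from the example, so substitute a real one to run it): the certificate binds the key to DNS names and nothing else - no organization, no statement about who runs the site.

    import socket
    import ssl

    domain = "myispmail.com"   # fictional domain from the example above

    ctx = ssl.create_default_context()
    with socket.create_connection((domain, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=domain) as tls:
            cert = tls.getpeercert()   # only populated after successful verification

    # A DV cert binds the key to DNS names, nothing more.
    dns_names = [value for (field, value) in cert.get("subjectAltName", ()) if field == "DNS"]
    print("this certificate is valid for:", dns_names)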
One important thing to note is that DNS is not secure - it can be spoofed, as Dan Kaminsky has recently demonstrated. (Dan has some interesting thoughts about the interaction of DNS flaws and certificate flaws. His one-line summary: "weak authentication leads to pwnage".) DNSSEC is still some way off and, until it arrives, you can't be sure that the IP address DNS returns when you look up "myispmail.com" is correct. You could be sent anywhere. SSL certificates from a trusted third party are the only way of ensuring that you are actually connected to the site you asked for. That is the service you are getting with a DV cert. This does not have to cost you anything: providers such as StartCom supply free certificates[1].
If none of the free providers suits you, year-long all-browser certificates can be had for as little as $14.95 from some vendors.
In an ideal world, being sure you are connected to the correct site would be something the Internet provided for free. DNS was designed in a time when people didn't think malicious people would try to subvert it, and the proper fix has not yet been deployed. Until it is, IMO, the hassle of obtaining a free certificate is not an exorbitant price to pay to fix the security hole.
(Excursus: if domain ownership is validated by sending email to a contact at the domain, then this sort of certificate could be fraudulently obtained by an attacker who is able to subvert the CA's DNS or routing - by Kaminsky-style DNS attacks, BGP hijacking or the like - because they can then intercept the email. So in fact, in some ways, the security of such certs does depend on the security of the DNS. Which is worrying.)
3. Connection Repeatability
Scenario 3 applies when you want to make sure you are connected to the same person you were connected to before.
This is where the supporters of self-signed certs say that they come in. Often, they say: "$14.95 is still too expensive - why can't I self-sign my cert?" Frank notes that this form of the argument is a non-sequitur. The security or otherwise of self-signed certs is not connected to the cost of CA certs. But we still should address the issue.
We've already noted that if you have zero authentication, you have zero security. Self-signed certs provide non-zero authentication if you do the following:
- Confirm the key fingerprint out-of-band (i.e. by non-web means) on first connect.
- Make sure it's always the same thereafter (software normally does this for you automatically).
- Re-confirm the new fingerprint out-of-band if the key changes.
If you do these three things, you get a repeatable secure connection to whoever it was you contacted out-of-band in step 1).
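For the geeks in the audience, here is a rough Python sketch of that three-step model (the pin-store file name and helper names are just placeholders): trust is anchored not in a CA but in the fingerprint you confirmed out-of-band on first contact.

    import hashlib
    import json
    import socket
    import ssl

    PIN_FILE = "known_certs.json"   # placeholder local pin store

    def remote_fingerprint(host, port=443):
        # Accept any certificate: KCM relies on continuity, not on a CA signature.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def check_continuity(host):
        try:
            with open(PIN_FILE) as f:
                pins = json.load(f)
        except FileNotFoundError:
            pins = {}
        seen = remote_fingerprint(host)
        if host not in pins:
            # Step 1: a human must confirm this value out-of-band (phone, in person).
            print("NEW host %s: confirm fingerprint %s out-of-band" % (host, seen))
            pins[host] = seen
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
        elif pins[host] == seen:
            # Step 2: same key as last time.
            print("%s: same key as before" % host)
        else:
            # Step 3: either a routine key change or a man in the middle -
            # only an out-of-band re-confirmation can tell you which.
            print("WARNING: key for %s has changed! Re-confirm before trusting it." % host)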
Leaving aside the fact that many people who use this model for SSH don't bother to do 1) in practice but just say "OK" and hope, it is our assertion that no-one has yet come up with a UI that makes this model of crypto (known as Key Continuity Management - KCM - or "the SSH model") understandable to Joe Public. You can't provide him with a string of hex characters and expect him to read it over the phone to his bank. What he does instead is just click "OK", which might as well be labelled "Yeah, Whatever", and hope for the best. The same thing happens when he gets "key changed!" warnings, even scary ones.
The first important thing to note about this model is that key changes are an expected part of life. No-one does or should use the same key for ever, and key compromise or discovered weakness means that keys change. So the user is going to get a series of alerts over time, some of which indicate an OK condition, and some of which indicate a dangerous condition. It is our assertion that no UI can navigate Joe through this complexity in a safe way.
Usability research tells us that repeated security dialogs and warnings habituate users into just clicking "OK" - it's the "Yeah, Whatever" thing again. If that dialog mostly indicates a benign condition but occasionally indicates a serious one, then the problem is compounded. This happens no matter what the dialog says. UI designers can work on the wording for a year, but whatever it is, it'll eventually just get ignored.
Secondly, there's no protection against compromised keys. If someone gets hold of your private key, they can impersonate you at will - and there's nothing you can do about it. The revocation story for SSL certificates has historically also been poor for patent and performance reasons, but that's changing with the advent of OCSP, which is required for EV certificates from 2010. Key compromise is not just a theoretical problem - almost all SSL keys generated on Debian systems for the 18 months to May 2008 are totally compromised, due to a flaw in the random number generator. Attackers can work out the private key if they know the public one.
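As a sketch of what a revocation check looks like (assuming the third-party Python cryptography package; the file names are placeholders), a client can ask the CA's OCSP responder whether a given certificate is still good - something no self-signed cert can offer.

    import urllib.request
    from cryptography import x509
    from cryptography.x509 import ocsp
    from cryptography.x509.oid import AuthorityInformationAccessOID
    from cryptography.hazmat.primitives import hashes, serialization

    # Placeholder files: the certificate to check and the CA cert that issued it.
    with open("server.pem", "rb") as f:
        leaf = x509.load_pem_x509_certificate(f.read())
    with open("issuer.pem", "rb") as f:
        issuer = x509.load_pem_x509_certificate(f.read())

    # The cert itself says where its OCSP responder lives (the AIA extension).
    aia = leaf.extensions.get_extension_for_class(x509.AuthorityInformationAccess).value
    url = next(d.access_location.value for d in aia
               if d.access_method == AuthorityInformationAccessOID.OCSP)

    # Build and send the OCSP request, then read back the status.
    req = ocsp.OCSPRequestBuilder().add_certificate(leaf, issuer, hashes.SHA1()).build()
    http = urllib.request.Request(url, data=req.public_bytes(serialization.Encoding.DER),
                                  headers={"Content-Type": "application/ocsp-request"})
    resp = ocsp.load_der_ocsp_response(urllib.request.urlopen(http).read())
    print(resp.certificate_status)   # GOOD, REVOKED or UNKNOWN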
There's also a privacy issue - the browser has to keep a list of SSL sites you've visited and (unless you want yet more cert change warnings) can't clear it when you clear your history etc.
"OK," you may say. "This model is not for Joe. But why can't I use it? I understand the risks. I promise to confirm my key fingerprints out-of-band for every new connection or key change. Really. I promise."
The crucial problem here, that cannot be stressed enough, is that while this is a reasonable desire in a (very small) constituency of geeks like us, we have not found a way to make it any easier for geeks to use the KCM model without putting at risk all the people who only ever use the standard model. The certificate change warnings which are a regular part of life in KCM indicate an attack in normal SSL usage. We don't want to minimize the seriousness of the warnings that protect users in normal use, or the difficulty of bypassing them.
But what about intranets? Why should they have to pay? There are two possible answers. One solution is to install the company's own root certificate in the browser; everyone can do this manually, or the IT department can use the Client Customization Kit (CCK) to make a custom Firefox.
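To show the moving parts, here is a rough sketch - assuming the third-party Python cryptography package and placeholder names - of the do-it-yourself root that approach requires: one self-signed CA certificate, and one private key that everything else then depends on.

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Generate the root key and a self-signed CA certificate (issuer == subject).
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Corp Internal Root")])
    now = datetime.datetime.utcnow()
    root = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )

    # This is the file every browser in the company has to be told to trust -
    # and the private key above is now a single point of failure for all of them.
    with open("company-root.pem", "wb") as f:
        f.write(root.public_bytes(serialization.Encoding.PEM))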
However, running your own CA has its own hidden costs - and you normally discover them after a key compromise when you have to update all the certificates at once, and everyone has to learn a lot about crypto really quickly. A simpler solution is just to get in touch with StartCom, or budget for a few expenditures of $14.95 or whatever, and use the same public CA system everyone else does.
Conclusion
This issue is not as simple as it appears. We have done a lot of thinking about what's possible and safe, and what isn't. Like everything else Mozilla does, this is driven by a desire to protect our users, not a desire to make people pay for SSL certificates (why would that be a goal?). We are open to suggestions, but we think the current UI strikes the right balance.
Bonus Section: Slashdot Myths Rebuttal
The top-level items below are quotes or ideas from Slashdot comments; our responses are indented beneath them.
- any cert can be compromised within seconds after it is issued, [...] therefore, certs provide NO assurance you're connected to who the URL indicates you are.
  - A cert can be compromised "within seconds of being issued" if e.g. your website has already been hacked when you install it - in which case, you have bigger problems. Even taking this rare possibility into account, the second statement is only true if there is no revocation. In other words, it is true of self-signed certs but not of e.g. EV certs.
- CAs who charge nothing cannot be performing any identity assurances at all
  - Many companies choose to give away particular services for free, which have a non-zero cost associated with them, for business reasons.
- Verisign is a monopoly. The 100 CAs in FF are all monopolies
  - This is somewhat self-refuting. Depending on how you count, there are over 40 different CAs in the Firefox root store, of which many are also in the root stores of the other browsers. They provide certificates and services at a wide range of price points.
- opposing self-signed certs proves you're in favor of Verisign's monopoly
  - Verisign doesn't have a monopoly (see above). This argument is a case of poisoning the well.
- self-signed certs provide better identification than no SSL certs
  - A false sense of security is worse than no security.
- certs cost at least USD$100 per year, certificate costs are horrendous
  - See the main text. In fact, for many people or organizations $100 is an upper limit, because for about that much you can get a wildcard certificate for as many websites in the same domain as you like.
- "chained certs" are inferior to those issued directly by a root CA.
  - Chained certs mean that the CA does not use its embedded root certificate directly for signing, which means the root can be kept locked up somewhere safe and inaccessible. Therefore, certs which are chained are, if anything, safer.
- CAs bribe Mozilla to put their certs into Firefox
  - Mozilla does not solicit or accept payment for inclusion in the root store.
- No certs provide any real assurances at all because anyone can get a cert for any organization name or any domain name they wish.
  - No-one should be able to get a cert for a domain name they don't control (unless they self-sign it). If you have evidence they can, let's see it. DV certs do not vet the O (Organisation) field, but Firefox does not show that field in the UI for such certs, so that's OK.
- Verisign is a fraud because people have signed malware with authenticode certs issued by Verisign.
  - Certificates are about identity, not virtue.
- CACert is the only free CA
  - See the main text.
- CAs do nothing more than run OpenSSL, which you can do for yourself for free.
  - Check out the WebTrust Principles and Criteria to get some idea of the levels of integrity and security CA systems have to meet.
This article is licensed under CC-BY-SA 3.0 or later.
[1] And what of CAcert? CAcert's unique position has required Mozilla to be very accommodating in terms of making sure they can fit with our procedures. They don't yet have the "fair-but-firm" audit that our requirements demand, but their progress is here. (Note that one item on that list is "new roots".) We would be very happy to see that process resolved as soon as possible, and to have another free, but audited, option in our root store.