Improving Authentication On The Internet

Version 0.4 - 2005-05-12

Introduction

The current system for securing end-user transactions over the Internet consists of information transfer via HTTP over SSL, with trust established using server-based certificates. The components of this system need re-examining in the light of the current threats to Internet-based commerce.

Threat Analysis

There are three classes of threat to secure transactions over the Internet which fall within the scope of this paper. (Threats such as server compromise, company employee dishonesty, trojaned clients and so on are outside it.) They are:

  1. Eavesdropping (someone is listening to my conversation)
  2. Impersonation (I'm not conversing with who I think I am)
  3. Scamming (I'm conversing with who I think I am, but they are dishonest)

The difference between impersonation and scamming is as follows. Impersonation is where I think I'm conversing with Barclays Bank, but actually I'm knowingly conversing with www.secure-barclays.co.uk, who I assume are Barclays but are not. Scamming is where I am conversing with what appears to be a legitimate organisation such as a business, but they misuse the information I give them.

Is "scamming" the best word here? It needs to be specific enough not to include those items covered under impersonation, so words like "dishonesty", "fraud" and so on don't work. "Misrepresentation"? "False pretences"?

Current Threats

If we look at which of these threats is most prevalent in May 2005, the answer is clearly impersonation, in the form of "phishing". Phishing is the setting-up of fake websites purporting to be those of existing well-known entities, with the aim of harvesting valuable information such as bank login details or credit card numbers. The existence of the 300-member Anti-Phishing Working Group is evidence of industry concern over this issue.

No-one is cracking the encryption on secure connections, because the value of the data secured by a single transaction is generally far too low. This is unlikely to change: as cracking hardware gets cheaper, key lengths get longer and cracking gets harder. The reason there are few complex scamming attacks, however, is not technical but pragmatic - impersonation works, and it is much easier and cheaper. As impersonation gets harder, scamming will rise.

Today, almost all phishing is conducted over non-secure channels, putting the combatting of it outside the scope of the SSL model. (That is to say, driving phishers onto SSL is an important task, but how to do it is outside the scope of this paper.) However, as user awareness and education improve, phishers will look to add extra legitimacy to their sites by providing a "secure" connection, the better to ape legitimate sites. At that point the threats of impersonation and scamming will start to impact the SSL model, so it is useful to examine how well it protects against them.

Privacy, Validation and Authentication

To combat the three named threats, the model must provide the following properties:

  1. Privacy - stopping people listening in - combats eavesdropping.
  2. (Domain) Validation - knowing that you are talking to www.good.com and not www.evil.com - combats impersonation.
  3. (Site Operator) Authentication - knowing that the police can find the owners of www.good.com if they turn out to be crooks - combats scamming.

For our purposes, the threat of eavesdropping is well contained. Privacy is provided by encryption, and any variation is a function of the strength of the encryption used. However, using high-grade encryption for all transactions is technically easy and financially relatively inexpensive. Notwithstanding the enormous efforts some go to for small successes, no-one seriously argues that, for example, AES-256 can be broken by an eavesdropping attacker.

Validation, at least of domain control, is also not currently an issue. Assuming the method of contacting a domain owner is secure, domain validation can also be provided with relative ease.

Authentication, on the other hand, is much more of a continuum, because it involves those tricky real-world concepts of identity, trustworthiness, honesty and so on. It's also hard to measure, and the methods for ensuring it change over time. Authentication is needed partly for prevention (attackers will be reluctant to reveal information about themselves) and partly for after-the-fact accountability. As Bruce Schneier points out, we can never be 100% successful in preventing attacks, so detection and response need to be part of the solution.

I therefore suggest that privacy, validation and authentication are related as shown by the diagram to the right.

Normal web connections over HTTP have no privacy except that accidentally provided by network topology, no validation except that provided at domain registration or IP-issuance time, and no significant authentication. They therefore fall into the top left square.

A security model which provided only privacy (bottom left square) would be like SSH: when you connect to a site for the first time, it provides you with a key fingerprint, and you then need to use external means to make sure that fingerprint belongs to the person you think you are talking to. Only after that can you guarantee you are always talking to the same person. While this model has worked well in the areas where SSH is used, millions of end-users are clearly not going to go through this process for every secure site they visit, so it is not appropriate for SSL. The distinction between the top left and bottom left squares is not relevant to an end-user.

A model which provided only validation (top right square) would be one which made sure you were connected to the site you thought you were, but allowed anyone to listen in. The use of Secure DNS with HTTP is in this category.

A model which provided only privacy and validation (bottom right square, base of arrow) would mean that you connect to a site, and you are certain you've connected to that site, but you have to use means external to the SSL model to get authentication - i.e. to make sure the site is owned by who you think it is, and that the site owners are trustworthy. Examples might be "Personal Computer World recommended them", "My brother bought something last month and it went fine", or a browser plugin from a trusted provider which referenced a list of trusted sites for you and showed its findings.

Privacy and validation are prerequisites for authentication. If you have no privacy, the supposedly-authenticated entity could claim "we were eavesdropped". If you have no validation, they could claim "you weren't actually talking to me". Therefore, the authentication continuum is rooted only in the bottom right square, and proceeds from there.

How Secure Transactions Work Now

The current model works as follows. Web browsers include a number of "root certificates", which belong to Certificate Authorities (CAs) and are distributed with 'trust bits' set by default. Browser manufacturers choose whose root certificates to include; their methods of choosing vary. A number of well-known CAs, such as Verisign and Geotrust, have their root certificates in all major browsers.

When someone comes to a CA with a request for a certificate ("cert") for a particular site, the CA does some verification to check that the requestor is allowed to have that certificate. Validation is provided by checking that the requestor actually controls the domain in question. Beyond that, different levels of authentication can be reached by doing different sorts of checks; the amount of authentication varies from CA to CA, and even between different products of the same CA. After verification, the CA uses its root certificate to sign a server certificate for that site, and hands it over to the requestor.
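
To make the signing step concrete, here is a minimal sketch using the Python "cryptography" library (not any CA's actual code): the CA signs a certificate containing the requestor's public key and domain name with its own root key. The names, key sizes and lifetime here are illustrative only.

    # Minimal sketch of a CA signing a server certificate, using the Python
    # "cryptography" library. Names, key sizes and lifetime are illustrative.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    def sign_server_cert(ca_key, ca_name, server_public_key, domain):
        """Return a server certificate for `domain`, signed by the CA's key."""
        subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, domain)])
        now = datetime.datetime.utcnow()
        return (
            x509.CertificateBuilder()
            .subject_name(subject)
            .issuer_name(ca_name)                      # the CA, not the site
            .public_key(server_public_key)             # the requestor's key
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=365))
            .sign(ca_key, hashes.SHA256())             # the CA's signature
        )

    # Illustrative usage with freshly generated keys:
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")])
    site_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    cert = sign_server_cert(ca_key, ca_name, site_key.public_key(), "www.example.com")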

The procedures of a CA, including the amount of verification done, are set down in a document called the Certification Practice Statement (CPS), available from the CA's website. A CA's compliance with its CPS is maintained via independent audit.

When a browser visits a secure website whose certificate it cannot validate - that is, one not signed by one of the trusted root certificates - it displays a warning. If the user proceeds, the browser's "secure" UI is displayed as normal thereafter, but for that site it represents secrecy only, even if the user chooses "don't warn me again". If the certificate is signed by a trusted root certificate, the secure UI appears automatically, this time representing secrecy, validation and some unknown level of authentication.
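
As a sketch of the check which drives this behaviour - using Python's standard ssl module rather than any browser's real code - the handshake fails when the certificate does not chain to a trusted root, which is the case where a browser would instead show its warning.

    # Minimal sketch of the validation a browser performs, using Python's
    # standard ssl module. The hostname is also checked against the cert.
    import socket
    import ssl

    def connection_validates(hostname: str, port: int = 443) -> bool:
        """True if the site's certificate chains to a trusted root and matches
        the hostname; False corresponds to the case where a browser warns."""
        context = ssl.create_default_context()        # platform root store
        try:
            with socket.create_connection((hostname, port), timeout=10) as sock:
                with context.wrap_socket(sock, server_hostname=hostname):
                    return True
        except ssl.SSLCertVerificationError:
            return False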

Consumers are generally not aware of these fine distinctions. Those who look for the UI (usually a lock icon) at all consider it a binary sign of "security", and are encouraged to do so by banks, merchants, browser vendors and CAs. The presence of the lock is treated as a positive answer to the question "can I safely use my credit card number or do online banking on this site?"

If a certificate is incorrectly issued, or the private key is compromised and the problem is discovered, the CA revokes the certificate by publishing its details in a Certificate Revocation List (CRL). If a CA suddenly issues a sufficient number of fraudulent certs, the browser manufacturer could produce a security update to their software which removes that CA's root certificates from the store.
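
A minimal sketch of how a relying party could consult such a CRL, assuming the Python "cryptography" library, a DER-encoded list and an illustrative URL: download the list and look for the certificate's serial number.

    # Minimal sketch of a CRL lookup. A real client would also verify the
    # CRL's signature against the CA and check its freshness.
    import urllib.request
    from cryptography import x509

    def is_revoked(cert: x509.Certificate, crl_url: str) -> bool:
        """Return True if the certificate's serial number appears on the CRL."""
        with urllib.request.urlopen(crl_url) as response:
            crl = x509.load_der_x509_crl(response.read())
        # Returns a RevokedCertificate entry, or None if the serial is absent.
        return crl.get_revoked_certificate_by_serial_number(cert.serial_number) is not None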

Shortcomings Of The Current Model

No Guarantee of Authentication

The current SSL model, as used in browsers today, provides good privacy and validation, but provides no guaranteed authentication. The level of authentication varies from CA to CA, but there is no objective measure and so the level is not encoded in any standard way into the certificate, and cannot be displayed to the user. Even though validation alone is insufficient to combat phishing - a validated connection to www.paypal-payments.com is just as unsafe as an insecure one to 12.34.56.78 - recent changes in the market show some CAs having to reduce the amount of checking they do in order to compete with other CAs who do not provide authentication. Some CAs already expressly advertise and sell certificates with almost no authentication at all.

The audits a CA undergoes merely make sure that the procedures they follow are those in their CPS. The audit makes no comment on whether the procedures are adequate for establishing a particular level of authentication. A CPS which said "we issue certificates to everyone with no checking at all" would pass audit if the CA did what it promised.

No Revocation

A certificate, once issued, can't be revoked - at least, not practically. No browser checks CRLs by default, because they can run to hundreds of kilobytes in size and would have to be downloaded on the fly. A protocol called the Online Certificate Status Protocol (OCSP) was invented to get around this issue: it allows the status of a single certificate to be checked in real time. However, for various technical reasons, only one browser (Opera 8) performs this check by default.
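
For illustration, here is a minimal sketch of such a check with the Python "cryptography" library and urllib: build an OCSP request for one certificate, POST it to the responder, and read back the status. Error handling and checks on the response's signature and freshness are omitted.

    # Minimal sketch of an OCSP status check for a single certificate.
    import urllib.request
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    def ocsp_status(cert: x509.Certificate, issuer: x509.Certificate, responder_url: str):
        """Ask the OCSP responder whether `cert` is GOOD, REVOKED or UNKNOWN."""
        builder = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, hashes.SHA1())
        request = urllib.request.Request(
            responder_url,
            data=builder.build().public_bytes(serialization.Encoding.DER),
            headers={"Content-Type": "application/ocsp-request"},
        )
        with urllib.request.urlopen(request) as response:
            reply = ocsp.load_der_ocsp_response(response.read())
        # A real client would also verify the responder's signature and the
        # thisUpdate/nextUpdate freshness of the response.
        return reply.certificate_status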

No (Practical) Removal

If a CA started issuing certs with no verification at all as a matter of business practice, the only recourse a browser manufacturer has is to remove its root cert. This then causes error popups and warnings for every user who visits a site secured with a certificate signed by that cert. For large certificate authorities, who issue the certs for many popular sites, this is all but impossible in practice. To date, I know of no instance where such a removal has happened.

Some Things Worse Than Nothing

Using a self-signed certificate, or one signed by an unknown CA, currently pops up a warning dialog, which appears scarier to the user than plain unencrypted HTTP - even though the two are equivalent from an end-user's point of view. In some browsers, connecting to a server providing only 40-bit encryption also looks scarier than using no encryption at all.

Economics of Phishing

Before I suggest some solutions to the aforementioned problems, it is useful here to take a quick look at the economics of phishing, and how they might affect our choice of actions.

What drives phishers to be so aggressive - the desire for money - is also their Achilles heel. Phishers will stop phishing when it is no longer economically sensible to do so. Currently, attempting an SSL phish can cost next to nothing, and the gains are potentially great. We need to reverse this. There are two ways to make obtaining a certificate for fraudulent purposes financially unviable: increase the cost, or potential cost, of obtaining the certificate, or decrease the gain possible after it has been obtained.

It would be hard to increase the financial cost of obtaining a certificate sufficiently to deter phishing without also deterring a lot of legitimate uses. Therefore, the increased cost has to be in terms of revealed information (useful to law enforcement) rather than money - i.e. in greater authentication.

Once a certificate has been issued, the only way to decrease the gain possible using it is to shorten the period for which it remains useful. This would involve establishing and using a real-time certificate revocation infrastructure.

Proposed Changes

Separate Security, Validation and Authentication

The browser UI should separate the display of security, validation and level of authentication. This allows the user to know more exactly what level of protection they have against fraud. It is hoped that browser manufacturers could collaborate on defining the form of this UI, as it is important that all browsers maintain UI consistency in order to make it possible to define a simple consumer message. Some concrete suggestions are in Appendix A; they are separated out because people may wish to agree with my conclusion here but disagree with my suggestions.

Define Authentication Levels

We should define a number of authentication levels that the UI should show, in terms which a consumer can understand and which can be referenced in the simple consumer message. The number of levels should be as low as can be got away with. Again, a concrete suggestion is in Appendix A.

By issuing certificates with non-zero authentication levels, the CA would be assuming some level of liability for any losses caused by a failure to screen out fraudsters. Exactly what liability they are assuming is again beyond the scope of this paper, but it would need to be such that this whole exercise isn't merely "security theatre". The costs of certificates at each level would probably reflect the level of liability the CA assumes.

Regular independent audits of verification procedures should be required for all CAs. The results of each audit, and the nature of the verification procedures, must be public. These audits should check that the amount of verification done for each type of cert issued is sufficient for the authentication level for the root it is issued under. If the authentication level is "none", the audit merely needs to ascertain that the CA's method of contacting a domain owner is secure. Beyond that, exactly what checking should be done for each level is (thankfully) well beyond the scope of this paper, and is a matter for the CAs, trust experts and the auditor to work out between them.

Of course, browser manufacturers have the final say about which authentication level they mark each root cert with, and would reserve the right to alter the suggested level downwards should there be significant levels of nefarious activity associated with certificates issued from that root. This allows for finer-grained control than merely removing certificates altogether.

Ideally, certificates with different amounts of verification should be issued from different roots or sub-roots. The browser can store all of these in its certificate store, and mark them with its current view of their authentication level. (In practice, some root-cert-specific heuristics may be necessary for legacy roots.) For certificates with authentication, the fields (OU, O, C etc.) should be filled in with data that is correct according to the CA's knowledge, suitable for display to the user. For certificates with no authentication, the values should either be blank or clearly reflect the CA's lack of knowledge of the correct values.
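
As a sketch of the display side of this, assuming the Python "cryptography" library: pull the O and C fields out of the subject, and fall back to an explicit "not verified" note when the issuing CA has left them empty. The wording of the fallback is illustrative only.

    # Minimal sketch of extracting subject fields for display alongside the
    # browser's own view of the issuing root's authentication level.
    from cryptography import x509
    from cryptography.x509.oid import NameOID

    def display_identity(cert: x509.Certificate) -> str:
        """Return "Organisation, Country" if present, else an 'unverified' note."""
        def first(oid):
            attrs = cert.subject.get_attributes_for_oid(oid)
            return attrs[0].value if attrs else None

        organisation = first(NameOID.ORGANIZATION_NAME)   # O
        country = first(NameOID.COUNTRY_NAME)             # C
        if organisation:
            return organisation + (f", {country}" if country else "")
        return "Identity not verified by the issuing CA"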

Enable Revocation

For authentication levels above "none", quick certificate revocation is needed to reduce the value of fraudulently-obtained certificates. The only game in town for this, technically, is OCSP. Therefore, it should be a requirement that certificates issued under roots marked for such authentication levels must have embedded OCSP URLs, pointing at a working OCSP responder.
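
For illustration, a minimal sketch (again assuming the Python "cryptography" library) of reading such an embedded OCSP URL, which lives in the certificate's Authority Information Access extension:

    # Minimal sketch: return the first embedded OCSP responder URL, or None
    # if the certificate carries no such URL.
    from cryptography import x509
    from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

    def embedded_ocsp_url(cert: x509.Certificate):
        """Return the OCSP responder URL embedded in the certificate, if any."""
        try:
            aia = cert.extensions.get_extension_for_oid(
                ExtensionOID.AUTHORITY_INFORMATION_ACCESS
            ).value
        except x509.ExtensionNotFound:
            return None
        for description in aia:
            if description.access_method == AuthorityInformationAccessOID.OCSP:
                return description.access_location.value
        return None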

Make All Insecure Connections The Same

SSL connections which do not provide sufficient privacy or validation should be shown in the browser UI as plain HTTP connections, with the exception that it must be possible to gain access to the details of the certificate, and it may be necessary to provide an explanation why this "https" connection is not marked as secure. None of the UI used for private/validated/authenticated connections should be used.

Into this category of "equivalent to plain HTTP" I would put:

  • Self-signed certificates
  • Certificates signed by unknown CAs
  • Connections providing only 40-bit encryption
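
A rough sketch of this classification, with hypothetical names rather than any browser's real code, might look like the following. The 40-bit cut-off comes from the list above; where exactly to draw the line on cipher strength is a policy choice.

    # Hypothetical classification: connections that fail to provide privacy
    # and validation get the same UI treatment as plain HTTP.
    from dataclasses import dataclass

    @dataclass
    class ConnectionInfo:
        uses_tls: bool
        chain_trusted: bool      # signed by a known root (not self-signed/unknown CA)
        cipher_bits: int

    def ui_class(conn: ConnectionInfo) -> str:
        """Return which UI treatment the connection should get."""
        if not conn.uses_tls:
            return "plain-http"
        if not conn.chain_trusted or conn.cipher_bits <= 40:
            # Self-signed, unknown CA, or only 40-bit encryption: shown as
            # plain HTTP, though certificate details remain accessible.
            return "plain-http"
        return "secure"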

Define A Simple Consumer Message

We should define a simple consumer message, which can be spread by CAs, browser manufacturers, banks and merchants. What the message is depends on the UI used to separate security, validation and authentication. A suggestion for a message is given in Appendix A.

There will need to be a period of transition when moving from the old to the new arrangements. Authentication levels need to be set, procedures need to be devised, audits need to be performed and new certificates need to be issued. To avoid confusing the consumer message with transitional information, we may want to have a flag day, such that browsers have timing code to turn on the new UI all at once when the other arrangements are in place. This could be accompanied by a concerted publicity campaign around the consumer message that explains that UI.


Appendix A: Concrete Suggestions

Authentication Levels

I suggest there should be three authentication levels, named after the activities users most commonly perform over secure connections. I am conscious that this perhaps presents an overly e-commerce-centric view of SSL, but it is difficult to balance complete accuracy with having a simple consumer message. The levels I suggest are:

  1. None
  2. Shopping
  3. Banking

The "none" level must exist, because there are legitimate uses for certificates which show no more than domain control.

The "shopping" level would be the most common, and used on sites where compromise of information normally put into that site would lead to the loss of a single credit card number, or equivalent.

The "banking" level would be a premium product suitable for companies or organisations for which a very high level of trust is needed. If compromise of information normally put into that site would lead to an attacker having access to a user's bank accounts or financial records, the site should have a certificate at this level. Of course, there's nothing to prevent shops buying these certificates, but I suspect that they would be significantly more expensive and inconvenient to obtain than "shop" level certificates.

Browser UI

Unlike the rest of this paper, this appendix is based around UI for Mozilla Firefox, although I am suggesting it as the UI to be adopted consistently across all browsers. The following discussion is based on the established Firefox security UI principle that there should be a piece of ever-present and reliable UI from which users can make security decisions. After much discussion, it was decided that using the status bar for this purpose was the best compromise between making the security context clear, and permitting believable and rich web applications to be written.

Any discussion of how the UI should work for the new model has, unfortunately, to take account of history, in the shape of the question "what do we use the lock icon to represent?". There are two main schools of thought here.

The first view says that, in the real world, a lock is all about privacy (in the colloquial sense), and so we should use the lock icon for all SSL connections which provide privacy and validation (in the senses in which I have used those terms in this paper). This basically means that it would be used for all SSL connections except those with self-signed certificates. The validation UI would then be a separate indicator.

The alternative view is that the lock is currently associated in a user's mind with some level of authentication, even if that's not currently true in practice. And therefore the lock should appear when some non-zero authentication level has been reached.

I believe that the first view is preferable, both in terms of making it easy to define a simple consumer message and in terms of minimum change to its current de facto meaning and the meaning one could assume from its form. I therefore propose a separate UI for authentication level - with the bottom level being solely the lock, and then adding extra symbols as authentication increases.

The simple symbol most closely associated with money around the world is the dollar sign, which is the currency symbol in many countries. This is the best candidate for a worldwide symbol. Another suggestion is a pile of coins.

For sites accessed over SSL, Firefox currently shows a domain indicator in the status bar. This is an anti-phishing measure to deal with URL bar complexity, and has the meaning "we are sure that this is the name of the site you are on". This fits very well with the concept of validation, and should be used to represent that.

I would therefore propose a set of UIs something like those given in the diagram to the right (with apologies for my poor artistic skills.)
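
Since the diagram is not reproduced here, the following rough sketch (in Python, with "[lock]" and "$" standing in for the real icons) renders the same idea as text: the lock and validated domain name for any private, validated connection, plus one currency symbol per authentication level above "none".

    # Hypothetical composition of the status-bar indicator described above.
    AUTH_LEVELS = {"none": 0, "shopping": 1, "banking": 2}

    def status_bar(domain: str, validated: bool, private: bool, auth_level: str) -> str:
        """Compose the indicator text for a connection."""
        if not (private and validated):
            return ""                  # treated like plain HTTP: no secure UI
        lock = "[lock]"
        money = "$" * AUTH_LEVELS.get(auth_level, 0)
        return f"{lock}{money} {domain}"

    # e.g. status_bar("www.example-bank.com", True, True, "banking")
    #      -> "[lock]$$ www.example-bank.com"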

The use of additional UI to indicate authentication level allows the consumer message in general to be more a case of "Hey! Look what browser manufacturers and CAs have done together to improve your safety!" rather than "Hey! The lock now means this level of trust, but this something else means this other level..."

Simple Consumer Message

My suggestion for the Simple Consumer Message would be something like (with suitable illustrations or footage of the dollar sign UI) "When cash is at stake, check for the money! One for shopping, two for banking!". This leverages the small number of levels, the fact that they are defined in a way users can understand, and the novel nature of the UI.

Appendix B: Changes Not Recommended

Display of CA name

It has been suggested that the browser show the name and/or logo of the Certificate Authority in the security UI. The idea is that this allows the user to make a judgement as to the security of the connection based upon their knowledge of the reputation of the CA in question. There are a number of excellent arguments against this; I will restrict myself to two main ones.

The brandable space available in the UI for such an indicator, if it is to be displayed at all times, is probably about 15px high by 100px long - a portion of the status bar. There is basically enough room for the company name/logo, and little else.

Firstly, in order for a "trust market" to be established, the user would need an extremely deep understanding of CA brands. Users would need to purchase or not purchase based on their view of the trustworthiness of the CA brand protecting the site. In other words, the following scenario would need to take place regularly: a user visits a web shop, spends half an hour filling their basket with goods, goes to the secure checkout and then, on the basis of their perception of the trustworthiness of the CA who signed the cert protecting the checkout page, abandons that basket of goods and shops elsewhere. I suggest that this is entirely unrealistic. You are pitting the CA branding established by a small rectangle in the browser UI against the multi-million dollar advertising campaigns of Gap or IBM, combined with the user's desire for the goods they are about to buy. It's no contest.

Secondly, there are 35 CAs with root certificates in Firefox 1.0, with more in the queue for later releases, and 52 in IE on Windows XP. A user would need to be aware of, and have opinions on, the trustworthiness of all of them. Otherwise, what should he be advised to do when encountering a site secured using a certificate from a CA he does not know?

Sensible security practice would suggest "don't use the site"; however this advice, if universally followed, would have a seriously detrimental effect on the amount of web commerce and, if persisted with, reduce the CA market to a small handful of players with big marketing budgets. Even in that case, the value of the total certificate market is probably not sufficient to allow CAs to do consumer-oriented marketing in all the countries of the world with Internet access. On the other hand, the advice "use the site anyway" destroys most of the point of the branding in the first place. CAs would have an incentive to make sure their brand and logo was not known!

It's important that security UI is stable - that is, it should be the same on every visit to a legitimate site, and different on any visit to a dodgy site. If a CA trust market were in operation, a switch from Just-Gone-Dodgy CA to More-Trustworthy-CA would be a good change (site upgrading its security), but a change in the reverse direction would be a bad change (possible phishing attack). A user would need very deep knowledge of the CA market in order to distinguish the two changes.

These factors and others together mean that presenting the CA logo in the UI would confuse the user far more than it improved security, and take up screen space that could be used for other, more useful indicators.

Original URL: http://www.gerv.net/security/improving-authentication/