Remember visited SSL details and warn when they change, like SSH
This was originally proposed by Ian Hickson AFAIK, filing as bug to track this.
We should have the browser track details of SSL connections, something like SSH
does. On first connection to an SSL-enabled site we should tell the user they
have not visited it before and ask if they want to proceed. Then we should
store information about the cert and the domain, and warn the user if the cert
the server provides changes. These warnings could perhaps be added to the
warning dialogs that secure connections already use.
For privacy reasons it should be possible to clear this information, maybe at
the same time as clearing page history. Of course, if you do that, you also lose
the protection this provides.
This should help against phishing.
Gerv has a proposal that would only do this for the server names (when connected
through SSL): http://www.gerv.net/security/phishing-browser-defences.html#new-site
While I didn't mention it earlier, I was thinking about how the information
would be stored.
On one specific point - what identifier to store as an index:
I was informed by the SSH guys (well, one really) that it was critically
important to the security model to index the history on exactly what the domain
was that the user typed in. In SSH terms this means that if the user were to
go to 192.168.0.1 and see a key there, then this would be one entry. But if
the user were to type in localhost and see the same key there, it would
get a separate entry.
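For illustration, the SSH behaviour described above can be sketched roughly like this (a hypothetical helper, not actual SSH or browser code) - note that the store is indexed on exactly what the user typed, so 'localhost' and '127.0.0.1' get separate entries even when they present the same key:

```python
known_hosts = {}  # typed name -> key fingerprint

def check_host(typed_name, fingerprint):
    """Return 'new', 'match', or 'CHANGED' for this typed name."""
    seen = known_hosts.get(typed_name)
    if seen is None:
        known_hosts[typed_name] = fingerprint  # first visit: remember it
        return "new"
    return "match" if seen == fingerprint else "CHANGED"

fp = "df:42:71:0d:8b:db:98:ec"
print(check_host("localhost", fp))       # 'new' - prompt the user
print(check_host("127.0.0.1", fp))       # 'new' again: separate index entry
print(check_host("localhost", fp))       # 'match'
print(check_host("localhost", "aa:bb"))  # 'CHANGED' - warn loudly
```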
Now, whether this changes for web based usage or not, I'm unsure. Having said
that, if the idea is a good idea, then just doing it some way would be good,
and details like the precise indexing can be fixed up in future releases as
more experience comes in.
Gerv's proposal doesn't store the server name, rather a hash of the server name.
Something like that reduces the privacy concerns somewhat.
Keep in mind that even on legit sites the cert is likely to change annually as
they expire and are replaced. Be careful that your scheme doesn't result in so
many warnings about cert changes that users tune them out and don't notice when
they've really been phished.
If certs are involved, then I think some changes could be handled automatically.
For example, if the previously seen cert has expired, and the new cert is
identical to the previous except for validity period, it would probably be ok to
accept it automatically.
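A minimal sketch of that rule, using assumed field names for the cert attributes (a real implementation would also have to handle the serial number and signature, which necessarily change on renewal):

```python
def renewal_ok(old, new):
    """True if 'new' matches 'old' in every field except the validity period."""
    ignore = {"not_before", "not_after"}
    keys = (set(old) | set(new)) - ignore
    return all(old.get(k) == new.get(k) for k in keys)

old = {"subject": "bank.example", "issuer": "Some CA",
       "public_key": "AAAA...", "not_after": "2004-01-01"}
new = dict(old, not_after="2005-01-01")
print(renewal_ok(old, new))                           # True: accept silently
print(renewal_ok(old, dict(new, issuer="Other CA")))  # False: warn the user
```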
#4. I'd agree with that.
The critical change is when a new cert comes in signed by a *different* CA. In
the event that this is a bad situation, both CAs can disclaim by pointing the
finger at each other. The bad CA just shrugs and says "I followed my
established and audited procedures...." In practice, even a little finger
pointing will break any semblance of CAs backing up their words.
This split of protection across two CAs means that the user must be notified
and must make a decision based on her "other sources". I.e., the decision must
be punted up to higher human layers as a potential phishing warning.
If it is however the same CA, then even if the new cert is bad, the one CA is
now on the hook for all aspects of the fraud. It's easy to pin down who is to
blame then, they should have checked what they already had for that domain.
(Which is not to say that the user shouldn't be notified, but there is some
leeway here.) The user may still get robbed, but then we move on to punishing
the CA in the marketplace by revealing their misdeeds.
Also note that this approach results in an incentive to stay with the same CA.
My attitude to that is "so be it." The price of a little security, and our
goal is security, not trying to manipulate the market for CAs in one direction
or the other.
There are already RFCs governing the validation of X509 certificates, such as
RFC 3280. The rules are quite clearly defined, and they are not at all the same
as SSH's. In particular, this proposal ignores the fact that a given entity can have multiple
certificates and keypairs valid at a given time. Thus, caching them and assuming
them to remain static for a given period of time is simply incorrect.
This proposal is clearly going to break a great deal of legitimate cases.
Certificate renewals are just one case, and they don't happen just once a year.
Entities are free to make much more frequent changes. I'll just give one example
where your scheme completely breaks. A public entity can also have multiple SSL
server certs and keys at a given time, for example with multiple SSL servers
behind a server farm, each with private keys stored in an HSM that can't be
replicated across different SSL servers in the farm, thus requiring different
SSL server certificates for each one in the farm.
This proposed scheme would simply be incompatible with such cases. The user
would be prompted every time he visited the public SSL web site that he hasn't
been there before, because he is really visiting a different web server in the
server farm, and that server serves a different SSL certificate, thanks to
front-end load balancing.
Wow! That's the first time someone has suggested a substantive reason why
SSH-style caching won't work in the SSL world that I know of! OK, let's work
through it.
What you are saying is that a "logical server" can dynamically use a set of
certificates, choosing one at its sole discretion for any given connection.
What about if we declare arbitrarily that a "logical server" can only use
certificates from one CA? So, as per #5 above, if it is inside one CA's
signing space, it's "valid" and if a cert is outside, then it's "uncertain" and
should throw up a question mark up to the human layer?
I'm whiteboarding here...
This would seem to fit with your described example - are there any other
examples where there would be a conflict?
Again, the suggestion referenced in comment #1 has none of these disadvantages. :-)
#1, #8. I assume here the domain used is that taken out of the certificate?
"Domain Hashing." This is a sort of half way house to the graphical
user-selected site logo of Herzberg & Gbara
You display a glyph where H&G display a logo selected by the user. Yours would
be easier to implement. Yet, the desire that other browsers would follow suit
I think is ... hopeful.
"New Site." The only difference I can see here is that you hash the
domain+password for some privacy. Is that what you mean?
"Phish Finder." Perhaps. I think this is a different problem than say spam;
the phisher only needs to slip through once, where as with spam, the user
benefits if the heuristics block 50%.
Re: comment #7,
Ian, that's a very arbitrary restriction you are adding there - saying that a
server can't have SSL certs from different CAs on different boxes of its server
farm. I'm not defending that practice, just pointing out there isn't any RFC
out there saying it's disallowed or wrong in any way - the same way no RFC says
I can't have multiple S/MIME certificates for my email address issued by
different CAs. This restriction would force a server host to change all the SSL
server certs for each box in his server farm if he uses a new CA for the cert of
any single new box.
Ian: "New Site" is the only one relevant to this bug; let's not clutter it up
with discussion of the other ones.
The differences between New Site and what's proposed here are that instead of
storing certs (with all the problems that involves, as has been discussed), we
just store domain names, and indicate to the user when they are visiting a new
one. Yes, they are hashed for privacy, although that's not the particular
feature which is relevant to the discussion we've been having from comment 4.
It's not that arbitrary - it's right along the fault lines of the security
model. Let's put it this way. If a server uses certs from multiple CAs
then we have a problem ever protecting that server from any attacks on any
other CA. There is no way that different CAs can be expected to know what
other CAs have issued as certs. All they can do is their own due diligence,
and check their own databases. There is no central cert database to check a
phishing attack by.
Each country has its namespaces, so McDonalds does exist as different companies
in different countries. ... if all CAs are treated equal then this is an
unworkable model for security purposes, there are something like 10,000 banks
in the US, and the CAs outside the US aren't going to be too fussed when
someone turns up and says they are People's Bank, based in say Joburg (yes
there is one). It's already possible to get a cert with only limited due
diligence, which means a fairly minor attack would give me a cert that looks
like a big institution. The only reason that this hasn't been an issue is that
the certificates have never been attacked in anger, they've been simply
bypassed in phishing.
This we hope will change. We desperately want the browser to force the
ecommerce activity onto SSL in a blatant and unmistakable way. As programmers
move the browser on to a better war footing and force SSL usage to be more and
more obvious - by this bugfix and others - then more and more incentives will
exist to go and get a dodgy cert using faxed false paperwork.
Dodgy certs will flow... the processes that CAs have in place today aren't
worth diddly squat if they are dealing with real crooks. So what we need to do
is draw firewalls between all the CAs, so that at the least, CAs can watch
their own patch. Check their own database. Do their due diligence on their
own customers and at least protect them, if not the customers of other CAs.
They can't possibly help when a CA on the otherside of the planet issues a
dodgy cert, but that's where the browser can help, by monitoring switches from
CA to CA.
Does that make sense?
re: comment #12,
Indeed, each CA cannot know what other CAs are doing at some other end of the
planet. The fact is that today, any root CA is allowed to issue certs for
anybody on the planet. Like it or not, that's the policy each CA has chosen,
which the mozilla foundation implicitly accepted by including their root CA
certs without any name constraints. Thus, just because I (or an SSL server) have
two certs issued to me from different roots, doesn't mean I'm conducting a
phishing attack. Your predicate is just incorrect; you cannot conclude anything
from that fact. Nothing tells you that one of the certs is invalid, or that one
of them is more or less correct than the other. Taken individually, they are
both valid. Taken together, they don't become any less valid.
What you want - making sure somebody obtains a cert only from his local CA, not
one at the other end of the planet - cannot be achieved, because DNS server
hostnames that we are trying to verify aren't even partitioned geographically.
I.e. there are .com TLD servers hosted all over the world, not just in the US.
The closest thing that can help is to use name constraints. I'll refer you to
RFC 3280, section 4.2.1.10, for more details. None of the root CAs are using it
at the moment. Don't ask me how to get the root CAs to use them. But it would
reduce the possible fraud for certs in the domains that are geographically
partitioned (eg. hostnames with country suffix TLDs).
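For illustration, a much-simplified sketch of the dNSName subtree check that RFC 3280 name constraints build on (real X.509 path validation involves far more than suffix matching) - a root CA constrained to ".fr" could not validly certify a .com hostname:

```python
def permitted(hostname, permitted_subtrees):
    """True if hostname falls under any permitted dNSName subtree."""
    return any(hostname == sub.lstrip(".") or hostname.endswith(sub)
               for sub in permitted_subtrees)

print(permitted("www.banque.fr", [".fr"]))    # True
print(permitted("peoplesbank.com", [".fr"]))  # False
```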
However, an attacker trying to counterfeit SSL certs will by definition always
go to the most careless CA. By definition, the main solution to this problem is
to disallow the use of those careless CAs, by not trusting them.
Gerv (#11): can you confirm question in #9 for me? Are you extracting the
domain out of the cert, as is displayed by the status bar? Or are you using
the URL as typed into the URL bar (which might be redirected... or otherwise)?
If the former, ok. What this amounts to is you are taking the identity as the
domain name and using the cert as a confirming primitive. (In which case
Pierre's issue is handled.)
There are always two things stored in the SSH model: the typed name and the
key. You are suggesting the name be the domain name and that the key not be
stored because it can confirm the (domain) name anyway.
Whereas in SSH, the key does not confirm the domain name. It's just a public
key with no cert aspects to it. So in SSH, the key *must* be stored and
checked against the name; but in SSL it would be negotiable as each new cert
can confirm the status bar domain name as long as we trust the cert.
Which would lead to needing to store the CA along with the domain name in your
proposal, if you follow my logic that you can trust the site's chosen CA, but
not any other.
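A rough sketch of that bookkeeping, with hypothetical names: store the issuing CA alongside the domain, and flag only a switch to a *different* CA, so renewals within the same CA pass silently:

```python
seen_ca = {}  # domain -> issuing CA name, as first seen

def check_issuer(domain, ca):
    """Flag a switch to a different CA for an already-seen domain."""
    prev = seen_ca.get(domain)
    if prev is None:
        seen_ca[domain] = ca
        return "new site"
    return "ok" if prev == ca else "CA changed - possible phish"

print(check_issuer("bank.example", "Favourite CA"))  # new site
print(check_issuer("bank.example", "Favourite CA"))  # ok (covers renewals too)
print(check_issuer("bank.example", "No-Name CA"))    # CA changed - possible phish
```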
And also deciding how to deal with multiple URLs ending up at the same site.
(Is your user-specified random string like the passwords db password?)
How's that? As we clarify these issues, it might help to upgrade your document
to include these things.
Julien, re #13.
> Your predicate is just incorrect; you cannot conclude anything
> from that fact. Nothing tells you that one of the certs is invalid,
> or that one of them is more or less correct than the other.
I'm not concluding either is correct or incorrect, I'm saying that it is
unknown and unknowable. If an attack is to be launched, it will not be
launched via the site's favourite CA, but via some no-name no-good dirt bag CA.
OTOH, if cert1 and cert2 are both by the site's favourite CA, then we can
conclude they are good. Both of them.
> making sure somebody obtains a cert only from his local CA, not
> one at the other end of the planet - cannot be achieved, because
> DNS server...
No, let me see if I can explain this situation. When a phisher attacks with a
cert, he will go and get it from another CA. So, if a different CA were to be
seen, this is an indication of a change. Either a renewal to another CA, or a
midstream change for unknown reasons, or a phish. At some stage this is going
to happen, and at some stage this information will be critical to the user.
The user will see this on the browser's security UI in some fashion. This is
the old / complete security model, this is not a new suggestion, just a model
that was dropped back in the good old days when there was no threat and crypto
was for fun. Now there's a purpose to going back to the old security model -
phishing - and that purpose is to protect the user against the attacks. It's
no longer for fun anymore.
So, when the browser displays the CA, the user herself will be drawn into the
security loop the same way she is being drawn in with the other proposal here
to show her when the certified domain changes. The users will then be in the
loop to keep their favourite sites in with the sites' favourite CAs. No DNS
needed. No CA action needed. It's all between the user and the browser.
> By definition, the main solution to this problem is
> to disallow the use of those careless CAs, by not trusting them.
Right, as long as you are saying that the user can see the CA and decide not to
trust them when this "careless CA" turns up. The alternate, that someone else
not trust them, like Mozilla Foundation, is a non-starter, because MF can't do
anything about the rogue CA's presence in the root list of all those deployed
browsers. It's all up to the user, once a CA goes rogue.
I wish I had the time to debate this topic further, but I really don't. In my
opinion, the proposal creates at least as many problems as it intends to solve.
Let me add that this bug illustrates why it is in MF's interest to limit the set
of trusted root CAs to known reliable entities.
Pierre, thanks for the input, the HSM issue was a bit of a surprise, but on
reflection, I suspect it comes out in the wash.
This is an ongoing issue. Where we are now is that the Mozilla root list is
expanding, but that expansion does not change the fact that nobody has as yet
put on the table any proposal that addresses *how* "to limit the set of trusted
root CAs to known reliable entities." See Frank's tireless search for that
proposal and/or method. Until someone can show that, I think we are stuck
without that nice situation ever happening.
I don't really want to point fingers at any CAs in particular, but I think it
needs to be recognised that reality is a lot of cheap certs and a lot of very
basic checks that are not going to stop any phishers.
You seem to be making this a lot more complex than it needs to be (IMO). Here's
the whole scheme:
- Store (in hashed form) every domain visited over SSL
- Warn the user when they hit a new one, in the status bar.
This is on the basis that users will have visited their high-value sites before
(to establish the relationship) but will never have visited a phisher before
(otherwise they'd have been phished already). So the normal site will not say
"new site", but the phishing site will, creating a difference the user can see.
You could remove the words "over SSL" and it would still work the same; the
reason to do it for SSL only is to keep up the pressure on high-value sites to
move to it.
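A minimal sketch of the scheme as described; the salted SHA-256 hash for the privacy part is my assumption about how the hashing could be done, not part of the proposal:

```python
import hashlib
import os

salt = os.urandom(16)  # per-profile random value (assumed detail)
visited = set()        # salted hashes of domains seen over SSL

def on_ssl_visit(domain):
    """Return what the status bar would show for this domain."""
    h = hashlib.sha256(salt + domain.encode()).hexdigest()
    if h in visited:
        return "known site"
    visited.add(h)
    return "NEW SITE"

print(on_ssl_visit("www.mybank.com"))   # NEW SITE
print(on_ssl_visit("www.mybank.com"))   # known site
print(on_ssl_visit("www.rnybank.com"))  # NEW SITE - the lookalike stands out
```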
I don't think there's a need to store certs, CA details or anything like that.
You were the one arguing for more fluidity in the cert market - we want banks to
be able to change their certs without alarming users.
Julien (comment 6) is right, I came across sites using different certs in
different parts of the site (e.g. citicorp), and there may be sites that use
different certs for exactly the same page when the keys are generated inside
(multiple) hardware boxes.
But I also agree with the concerns regarding the long list of CAs trusted -
equally - by the browser, where really they are not equally trustworthy... Even
the same CA issues different levels of certs and this is not known to users.
My preferred solution is TrustBar.mozdev.org, and I urge it to be incorporated
into the browser security UI. Namely, the first time the browser receives a cert
from a certain CA (from its current list), it prompts the user to select whether
to automatically trust all identifications by this CA. If so, it displays the
name/logo of the site and of the CA (in the Trust Bar), whenever receiving a
cert from this CA. The user can also choose to trust only this particular
identification by the CA (in which case he'll be asked again for any new cert
from this CA).
This allows users to notice when a site comes with a cert from one of the rarely
used CAs, which is the kind of attack we have seen as feasible lately.
Best, Amir Herzberg
Ian's arguments are predicated on the existence of untrustworthy CAs who
will issue certs falsely for domain names that are justly certified by
one or more other CAs.
That situation may arise someday, if we let untrustworthy CAs into the list.
But that has not yet been decided, and is not yet even proposed by the
current draft Mozilla CA cert policy. If one of the CAs trusted under the
present draft policy were to issue a rogue cert, and did not revoke it,
that would be cause for removal from the trust list.
So I suggest that we do not assume it to be the case today. Folks, it
remains the policy that we TRUST the CAs that we've decided to TRUST.
Ian very much would like to see the world be very different and is doing
every thing in his power to influence it all to be changed.
But be aware that any such policy change must be carefully considered
and not implemented piece-meal in some parts of mozilla without considering
the security implications across the board.
I have to agree that Gerv's favorite proposal has none of these objections.
It does not represent a policy change regarding trust of CAs, nor does it
prevent one in the future.
Let's implement Gerv's proposal, and see if that is enough. Gerv, could you file
a new bug that is only about your proposal so we don't need to confuse things
between these issues?
I have a feeling we should and can do something more, maybe with the help of
certs, but there are clearly problems to work around as has been pointed out.
However, I do want to make it clear that standards, specifications, RFCs etc.
should not be blindly followed. Many specs leave things either unspecified, or
don't take security into consideration, or new security issues have been
identified after the specs have been written. There are a lot of places in our
code already where we limit what specs would implicitly or even explicitly allow
because we now know there are security issues. Breaking existing applications is
clearly bad and should be avoided if possible.
Above, I didn't mean to accuse Ian of any wrongdoing, and in retrospect I see
that what I wrote could be construed to imply that I did. I only mean that
the suggestion that we need a solution to a problem of untrustworthy CAs will
influence some (who are not fully aware of the current CA trust policies) to
imagine that this is a problem that exists today, and to pursue solving this
non-problem. I think we should focus instead on the problem at hand.
I do not think that security policy should be decided entirely democratically,
nor that we should relax our standards so that they no longer exceed the
average person's understanding of the issues.
I am afraid that this issue is going to be (unduly, IMO) influenced by the
sheer volume of words expressed on one side of this discussion.
This isn't a matter of blindly following standards and RFCs. There is a
large community of security cognoscenti who are all behind PKI. I respect
their collective judgements. However they do not speak much in mozilla's
forums and bug reports. Mozilla's forums do hear a lot of the dissenting
opinion, however, and it is possible for someone whose only understanding
of the issues comes from these forums to conclude that these dissenting
opinions are the majority opinions, the opinions of the experts. This is
how cults operate, the members hear only one view.
I simply want the readers of this bug to be sure that they're solving the
real problems today, not the problems of some hypothetical situation that
we are not in, and to be aware that the views so voluminously expressed by
a few in mozilla forums may not be representative of the majority view of
security experts, nor of the larger mozilla user community (who do not
participate in these forums).
ssh does not make a distinction between localhost, 192.168.0.1, or a web IP/URI:
each unique connection is treated the same
(unique as in IP number or domain name).
Jaqui (#23) this is what I meant, rightly or wrongly, by example:
localhost$ ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
DSA key fingerprint is df:42:71:0d:8b:db:98:ec:a8:0f:8b:3d:88:ed:04:07.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (DSA) to the list of known hosts.
localhost$ ssh 127.0.0.1
The authenticity of host '127.0.0.1 (127.0.0.1)' can't be established.
DSA key fingerprint is df:42:71:0d:8b:db:98:ec:a8:0f:8b:3d:88:ed:04:07.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '127.0.0.1' (DSA) to the list of known hosts.
That is, even though this is a clear case of ending up at the same cert through
a known good mapping from localhost <--> 127.0.0.1, SSH will not optimise this
and in fact refuses to do so as a matter of security policy.
It will store two entries in its DB:
localhost ssh-dss AAAAB3NzaC1kc3MAAACBAOd.....
127.0.0.1 ssh-dss AAAAB3NzaC1kc3MAAA........
This is what is behind all my questions of exactly what each proposal is trying
to store as a) the index and b) the content found at that indexed name. This
feeds into my #2 above where for purpose of consideration, I was establishing
what SSH does .... whether this bugfix ends up following that pattern is a
completely different matter.
Just a post of clarification!
I've filed bug 286482 for implementing "New Site".
(Since this bug was based on something I originally proposed, I should point out
that my original proposal was also meant to eventually do away with CA-provided
certificates altogether. I don't trust Verisign, I don't trust any of the other
CAs, and I think my UA trusting them on my behalf is fatally flawed as a
security model. IMHO servers changing key or having different keys to other
servers at the same hostname or IP _should_ be raising eyebrows and the UA
_should_ tell me when that happens. But I don't expect dropping SSL to be a
popular suggestion.)
LOL... let me share an open secret with you - dropping SSL is not likely to be
a popular idea ;)
Now, it may be a very fine idea on paper. But in practice, it isn't going to
happen. There is too much infrastructure, knowledge, parties, dox, certs and
whatnot in place. So a much more profitable view is to make some slight
enhancements and to add in the parts that got dropped because of UI space
constraints.
Having said all that, it is fine to model SSL with CA-provided certificates one
end of a continuum and raw HTTP on the other end. Hopefully we can one day
walk the user from raw HTTP to something mildly better but still instantly
bootstrappable, hopping through "low" assurance certs and finally end up with
"high" assurance certs and complete end-to-end protection. This would be ideal
because it would dramatically improve the usability of SSL and thus improve
security by simply more usage.
One way to build that continuum is your suggestion - cache the names+certs in
the middle. Other ways are Gerv's suggestions and TrustBar. As a stepping
stone, these make it easier to then go to full CA-signed certificates _for
those high assurance sites that want that_. The big sites have reasons for
wanting those certs in place; and they shouldn't be denied that if that's what
they want and as long as the cost isn't on other people.
The challenge for the browser is to encapsulate that continuum in a way that
helps the user (which IMO reduces to one thing right now - helping against
phishing). "Doing away with certs" breaks the continuum and won't get any
traction.
(In reply to comment #1)
> Gerv has a proposal that would only do this for the server names (when connected
> through SSL): http://www.gerv.net/security/phishing-browser-defences.html#new-site
I'm not sure this proposal would work (based on storing hostname + cert-id) when
many server farms have wildcard or subject-alt names, and in many cases this is
for software (hostname) load balancing, like www1.mybank.com, ww2.mybank.com,
etc. Some server farms even have mixed, non-unique but multi-domain certs spread
across them (valid for the same domain), based on what the host needs to do
for load balancing (I do this on a couple servers at work (for backups), right
now).
(In reply to comment #18)
>You seem to be making this a lot more complex than it needs to be (IMO). Here's
>- Store (in hashed form) every domain visited over SSL
>- Warn the user when they hit a new one, in the status bar.
This proposal seems to suggest creating a new trust model separate from SSL...
fine in theory, but it will lead the average layman to think one or more of:
- domain trust means it's okay even without SSL (?) (hence :80 spoofing);
- the two systems are related, so if one says 'ok' then the other is wrong, or
both are right (countering ok's and ng's from the two).
From a 'solve the potential problems at their root' perspective, imo we should
address untrusted root-certs (even ones hosted from a non-ssl (domain spoofable) source)
(bug#267515), and I also believe that a more proactive notification (than mere
alerts or expecting the user to look at the lock or address-hostname)
security-info screen/widget needs to be built, like the one discussed here:
And since it's sometimes necessary to install new root certs (in a corporate /
university environment for example), a 'trust level' or 'flag' should be
assignable to root certs (and server/client,object), in case the corp or univ.
IT guy (let alone regular phishers) wanted to spoof your bank. This 'flag' along
with a list of any subject-alt/wildcard names, could then be listed in the above
described security-info scrolling window when that domain is loaded (akin to
Thunderbird's 'you have n messages' widget).
Also, the new 'security' scoller could/should display CRL/OCSP status -or if
there is none (possibly lowering its trust), since those services really do need
a platform to promote their use (which is what that widget could do, and for
which none exists right now, even among CAs).
my .02 cents.
> I'm not sure this proposal would work (based on storing hostname + cert-id)
> when many server farms have wildcard or subject-alt names
Who is suggesting storing cert IDs? The proposal is for a store of visited
domain names. And even if we did store cert IDs in a mapping, the wildcard issue
means it's a many-to-one rather than a one-to-one relationship. I don't see that
as a problem.
> in many cases this is for software (hostname) load balancing, like
> www1.mybank.com, ww2.mybank.com, etc.
If a particular user doesn't always get directed to the same one (which happens
sometimes), then the "new site" indicator will appear more times than is
strictly necessary for that site, thereby giving the user more pause for thought
than is strictly necessary. This is not such a bad thing.
> This proposal seems to suggest creating a new, trust model separate from ssl
Not at all. It tells the user when they are visiting a new site. No more, no
less. How the user interprets that information is up to them.
It's odd - you are not the only person who misunderstands this proposal as being
much more than it is. People in other bugs and in private mail have also talked
about it as a "new security indicator". It's not.
(In reply to comment #29)
> > I'm not sure this proposal would work (based on storing hostname + cert-id)
> > when many server farms have wildcard or subject-alt names
> Who is suggesting storing cert IDs?
Gerv, that is what SSH (mentioned in the subject) does (in a roundabout way):
cert+hostname. If the cert changes, new alert. If a virtual host points to the
same machine and cert, also an alert. And, comment #0 (the orig submission)
proposes storing the cert details.
Your idea as conveyed in comment #1 (matching only on hostname (hash),
apparently) doesn't protect against CA spoofs, nor is it server-farm friendly,
and that it only works for ssl sites (apparently) could give the end user a
false sense of when it should be protecting them ("why did I get spoofed into a
non-ssl site but not get the familiar alert, which is hostname based?"). In
other words what I'm saying is that it would be *better* to use your idea on
both SSL and non-ssl.
> > This proposal seems to suggest creating a new, trust model separate from ssl
> Not at all. It tells the user when they are visiting a new site. No more, no
> less. How the user interprets that information is up to them.
If *your* proposal, which *in some statements* is different from the orig post,
(which describes SSH-style flagging of *both* hostname + cert/pubkey changes)...
will notify the user for *both*... newly visited http and https sites (you
stated not needing the ssl cert/id, only the hostname/hash, right?), I agree it
has some usefulness.
But not if it only works for ssl/x509 sites, since this new notice/alert's
*absence* in non-ssl sites could mislead end users about when to expect its
protection (unless it is *explicitly* tied to watching for new certs / pub-keys).
So now you'll see another use for the 'Notice' screen we discussed on another
thread - to display this new 'notice'... Or would you propose adding a new alert
dialog (annoying), or trying to fit it into the already crowded status bar? ;-)
This is not a good idea.
Why? because it is mixing different security models.
SSH's model works fairly well in a small population of fixed, seldom-changing
machines with users reasonably well versed in what they are doing. For the most part,
users of SSH are trusting the underlying DNS system since almost no one ever
checks the actual signature. SSH is only secure in this way if the user is smart
enough to know when a 'new key' is unlikely.
Ian's second comment in this bug illustrates this point. SSH has to have some
domain based binding to the key on it's own, therefore the storing of the
binding based on the name. SSL already has that strong binding in the
certificate.
In a world where you are connecting to machines you don't control, have no idea
when they are upgraded, etc., there is no way to know when legitimate key
changes (not just cert changes) happen, unless you have an infrastructure to
tell you.
If you really want this behavior in your browser, you simply turn off trust for
*ALL* CAs. As you visit websites you will be forced to accept or reject
individual certs, and you will then be notified when certs have changed out
from underneath you. That gives you exactly the SSH semantics (actually it
tells you more; SSH only warns if the key changes).
For the rest of the population, the generation of warnings for otherwise
legitimate cases will reduce security rather than enhance it.
Trying to turn this bug from what's said in comment 0 into the sensible version at
http://www.gerv.net/security/phishing-browser-defences.html#new-site is proving
too difficult. Bob: feel free to close this as a bad idea based on comment #0;
I'll open a new bug for my version at some point.
I suggest keeping the default behavior, but allowing the user to manually select "lock in on the current cert". Then, regardless of whether the old/new cert is signed by a CA, the user gets a warning as soon as the cert changes. This warning would display if the CA changed, if the old cert was about to expire, etc. If the old cert was valid for a long time and the CA changed, I would assume someone is messing around. If the old cert was nearly expired and the CA did not change, I would assume everything is all right and decide to save the new cert.
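A rough sketch of that lock-in check (hypothetical names, not an actual PSM interface; the DER certificate bytes would come from the TLS handshake, e.g. `ssl.SSLSocket.getpeercert(binary_form=True)` in Python):

```python
import hashlib

# Hypothetical pin store: hostname -> SHA-256 fingerprint of the
# DER-encoded certificate seen when the user chose to lock in.
pinned = {}

def check_pin(hostname, der_cert):
    """Compare the presented certificate against the locked-in one.
    Warns on any change, regardless of whether the new cert is CA-signed."""
    fp = hashlib.sha256(der_cert).hexdigest()
    if hostname not in pinned:
        pinned[hostname] = fp       # user opted in: remember this cert
        return "locked-in"
    if pinned[hostname] == fp:
        return "ok"
    return "warning: certificate changed"
```

On a warning, a real UI would show the old and new cert details (CA, validity dates) so the user can make the expired-vs-messing-around judgment described above.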
A new security logo with a different color should be displayed if the current site has been set to lock-in.
On pages not "locked in", the old behavior should be kept; automatically locking in all sites does not seem sensible to me.
This thing might be an idea for a "paranoid ssl" extension as it would help only advanced users who know how ssl works.
*** Bug 481009 has been marked as a duplicate of this bug. ***
See also bug 471798 and bug 398721.
I tried to make the point in my bug 481009 that cookies should not be sent until that verification/consent is granted, if such a dialog, consent, etc. ever takes place.
Note that load-balanced server farms with a different certificate on each server are already problematic because PSM supports only one certificate override ("security exception") per expected hostname.
I tried using the Certificate Patrol add-on from https://addons.mozilla.org/en-US/firefox/addon/6415 . It warns you when a certificate changes. I found it unusable with my online bank. It's not just a load-balanced server farm: almost every menu choice I made resulted in the certificate changing too (usually just a different organizational unit and serial number). I verified this wasn't a bug in the add-on and even called the bank to confirm it was supposed to do this.
Supposedly the Certificate Patrol authors will eventually add a green/yellow/red indicator that will be smart enough to handle that type of use case.
http://files.cloudprivacy.net/ssl-mitm.pdf (Certified Lies: Detecting and Defeating Government Interception Attacks against SSL) talks about a CertLock add-on that sounds like a good solution. One of the authors of the paper is Sid Stamm from Mozilla. However, his blog at http://blog.sidstamm.com/ doesn't appear to mention the add-on's status.
Idiot banks are problematic either way.
This is precisely why there are SSL-capable load balancers out there. They could also use anycast and/or round-robin DNS for their servers. Right now I can't recall any serious technical hurdle to using just one domain and one certificate for it. Please correct me if I'm wrong.
You're not wrong. However, it's not clear to me that it makes sense to implement a partial solution if there might be two add-ons that provide a much better solution by the time it gets implemented.
If it only works with simple cases and you get false positives from more complex cases, many users might want to disable it. I've disabled Certificate Patrol despite it working fine for the simple case you describe, because I have little tolerance for false positives. Another issue is whether it would ever force a user to create a security exception (just like the security alerts about domain name mismatches) to make a secure connection. The domain name mismatch was just a warning at one time too.
I think with so many potential user interface issues it's better to use an add-on, since it can be updated more quickly, and the user has the option of uninstalling it if they don't like a policy decision it makes.
This is the same as bug 471798. I'm duping this way since the other bug appears more popular. Go ahead and flame me if this is not the right thing to do.
*** This bug has been marked as a duplicate of bug 471798 ***