Open Bug 1897630 Opened 2 months ago Updated 4 days ago

Entrust: Jurisdiction issue in some EV TLS & Code Signing certificates

Categories

(CA Program :: CA Certificate Compliance, task)

Tracking

(Not tracked)

ASSIGNED

People

(Reporter: ngook.kong, Assigned: ngook.kong)

Details

(Whiteboard: [ca-compliance] [ev-misissuance])

Attachments

(1 file)

User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36

During a scan of our certificate base with pkilint we discovered 74 certificates in which the jurisdiction locality was present and the jurisdiction state or province field was missing. This is not a security issue.

Investigation showed that the verification was performed incorrectly and jurisdiction-locality should not have been included. Based on further investigation into the relevant Subscriber profiles, most of the affected certificates should only have contained jurisdiction-country (agency at the country level), and some of the certificates should have contained both jurisdiction-state and jurisdiction-country (agency at the state level). None of the certificates should have contained jurisdiction-locality information.

Although we are still investigating all the root causes, we believe that this may be a recurrence of, or at least related to, bug 1802916, and we believe that it is not a recurrence of bug 1867130 or bug 1696227. We will provide our conclusions on this in our full incident report.

Further, we have corrected all known errors and also taken some preventative measures to ensure that going forward, certificates no longer include jurisdiction-locality information that should not be present.

Note that the pkilint scan and subsequent investigation occurred while Entrust was investigating and responding to other incidents, which has resulted in delays in drafting and posting this preliminary report. We will create another incident report for not filing this preliminary report within the expected 72-hour timeframe.

All affected subscribers were advised on 16 May 2024 of the mis-issuance and requirement to revoke within 5 days.

We will provide a full incident report on or before May 24, 2024.

  • Bug 1867130 (Nov 2023): Jurisdiction Locality Wrong in EV Certificate
    • Jurisdiction locality field contained a postal code
  • Bug 1802916 (Nov 2022): EV TLS Certificate incorrect jurisdiction
    • Place of business ST field was incorrectly used in the ST jurisdiction field
    • Place of business ST field was incorrectly used in the L jurisdiction field
    • ST jurisdiction was used when the registry was from the country level
  • Bug 1696227 (Mar 2021): Incorrect Jurisdiction Country Value in an EV Certificate
    • Jurisdiction country was set to the value of “ZA” when it should have been set to “BW”

Have you stopped issuance?

I appreciate the full incident report is being prepared, but can you tell us when this issue was confirmed internally?

Assignee: nobody → ngook.kong
Type: defect → task
Whiteboard: [ca-compliance] [ev-misissuance]
Status: UNCONFIRMED → ASSIGNED
Ever confirmed: true

(In reply to amir from comment #2)

Have you stopped issuance?

The certificate profiles had already been corrected by the time the incident was confirmed and thus we did not stop issuance. All affected certificates had expired or were revoked by May 21, 2024, 9:30 AM UTC.

Incident Report

Summary

During a scan of our certificate base with pkilint we discovered TLS EV certificates and Code Signing EV certificates in which the jurisdiction locality was present and the jurisdiction state or province field was missing. We initially determined that there were 74 affected certificates; upon further investigation we concluded that there were 101 TLS EV certificates and 5 Code Signing EV certificates affected.
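As an illustration only, the following minimal sketch (Python, using the cryptography library rather than pkilint itself; the OIDs are the standard EV Guidelines jurisdiction-of-incorporation attributes, and the function name is ours, not part of any Entrust or pkilint interface) shows the kind of check that flags this condition, i.e. a subject that contains jurisdictionLocalityName but no jurisdictionStateOrProvinceName:

    # Illustrative only: not pkilint itself, but the kind of check that flags the
    # condition described above -- jurisdictionLocalityName present in the subject
    # while jurisdictionStateOrProvinceName is absent.
    from cryptography import x509

    # EV Guidelines jurisdiction-of-incorporation attribute OIDs
    JURISDICTION_LOCALITY = x509.ObjectIdentifier("1.3.6.1.4.1.311.60.2.1.1")
    JURISDICTION_STATE = x509.ObjectIdentifier("1.3.6.1.4.1.311.60.2.1.2")
    JURISDICTION_COUNTRY = x509.ObjectIdentifier("1.3.6.1.4.1.311.60.2.1.3")

    def has_locality_without_state(pem_bytes: bytes) -> bool:
        """Return True if the certificate subject carries a jurisdiction
        locality but no jurisdiction state or province."""
        subject = x509.load_pem_x509_certificate(pem_bytes).subject
        locality = subject.get_attributes_for_oid(JURISDICTION_LOCALITY)
        state = subject.get_attributes_for_oid(JURISDICTION_STATE)
        return bool(locality) and not state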

Entrust ran a pkilint scan in early April 2024. The findings were shared with the product compliance team for further review and analysis, but the established process for escalating issues to senior management was not followed until May 12. On May 12, per Entrust policy and best practice, we assembled a team to investigate both the root cause of the issue itself and the cause of the process lapses that resulted in this incident.

Our teams researched the issue thoroughly, and on May 16, confirmed that this was a mis-issuance. All affected subscribers were advised on that day of the mis-issuance and the requirement to revoke within five days. All affected certificates had expired or were revoked by May 21, 2024, 9:30 AM UTC.

We filed a delayed reporting incident titled “Entrust: Delayed reporting of Jurisdiction issue in some EV TLS & Code Signing certificates” (bug 1898847).

We filed a delayed revocation incident titled “Entrust: Delayed revocation of certificates affected by Jurisdiction issue in some EV TLS & Code Signing certificates” (bug 1898848).

Impact

This incident impacted 101 TLS EV certificates and 5 Code Signing EV certificates that were issued from 2021-06-23 to 2024-04-08.

  • 106 certificates were impacted by this issue
    • 56 certificates have been revoked within the required 5-day revocation timeframe
    • 47 certificates had been revoked previously
    • 3 certificates had expired

The 47 certificates that had been revoked previously were revoked in relation to bug 1883843 or by the customer due to other reasons.

Timeline

All times are UTC.

2021-03-03:

  • Bug 1696227 (Mar 2021): Incorrect Jurisdiction Country Value in an EV Certificate
    • This incident is about certificates where the jurisdiction country is set to the value of “ZA” when it should have been set to “BW”. While this is related to incorrect information in the jurisdiction data of the certificate, this does not seem to be directly related to this incident.

2022-11-28:

  • Bug 1802916 (Nov 2022): EV TLS Certificate incorrect jurisdiction
    • In this incident report we identified that the jurisdiction state or province was used when the registry was from the country level. We did not identify cases where the jurisdiction state or province was missing and/or the registry was also from the country level.

2023-03-14:

  • Dropdown functionality was implemented for Private Organizations as described in bug 1802916 comment 7.

2023-11-28:

  • Bug 1867130 (Nov 2023): Jurisdiction Locality Wrong in EV Certificate
    • This incident detected a postal code in the jurisdiction locality field for a government entity (which does not come from the drop-down list) and was caused by insufficient indication of changes.

2024-03-09:

  • An ad hoc scan with pkilint was run in relation to bug 1883843, as part of the preparation work to implement pkilint as a post-issuance linter. The report shared by the engineer was seen as confirming the known incident (bug 1883843) and was not reviewed or investigated further by the compliance team because it was just a list of errors that did not identify the certificates. The team requested a report that would include certificate numbers. Unbeknownst to the compliance team, among the thousands of errors in the report, 42 related to the locality issue. The engineer working on this fix left the company, the task stalled, and the escalation process was not followed.

2024-04-03:

  • 13:12 A new scan with pkilint was started; initial results (while the scan was still running) highlighted an error where the jurisdiction locality was present and the state or province was missing. The issue was escalated to our verification team for further investigation.
  • 19:50 Verification data indicated that the organization profiles had been validated at the country level and that the locality was not listed for these jurisdictions.

2024-04-04:

  • 12:42 A compliance team member reviewed the issue and determined that the locality information in the certificates was likely incorrect. The issue was discussed with the verification team who started a deeper investigation into why their data was not the same as in the certificates.
  • 15:00 The issue was discussed between compliance and the verification team resulting in the need for further investigation.

2024-04-08:

  • We detected a certificate issued to a government entity with the same issue; however, the logic for government entities is different from that for private organizations. Whereas private organizations leverage our pre-verified jurisdiction list located at https://www.entrust.com/legal-compliance/approved-incorporating-agencies, government organizations are verified manually.

2024-04-11:

  • 10:05 pkilint was added as a post-issuance linter in production.

2024-04-15:

  • 11:48 The problem was included in a report from a partial scan with pkilint during the implementation of the post-issuance linter. The results of this scan were included in the communication to the product compliance team manager and should have been escalated at that time through established processes. It was not escalated because the compliance team manager incorrectly assumed the data was reporting the cPSuri problem from bug 1883843 that was already being addressed.

2024-05-12:

  • 02:19 A member of the product compliance team (unprompted) identified that this issue had not been reported and actioned. Following process, senior leadership was informed and our incident handling procedure was initiated.

2024-05-13:

  • 13:00 The product compliance team manager formally started an investigation.

2024-05-16:

  • 11:55 Mis-issuance confirmed and final certificate data verified.
  • 11:55 We started the 5-day revocation clock.
  • 16:00 Notified subscribers of the impacted certificates and that they would be revoked within 5 days.

2024-05-21:

  • 09:30 All remaining impacted certificates were revoked.

Root Cause Analysis

  1. The missing jurisdiction state or province field was due to human error.
  2. We did not follow appropriate processes in triaging issues.
  3. The product compliance team was not integrated with Entrust corporate organizations dedicated to broader compliance.

1. Why was there a problem?

We discovered certificates in which the jurisdiction locality was present and the jurisdiction state or province field was missing. Further investigation showed that the verification should not have included jurisdiction-locality.

2. Why was the verification performed incorrectly?

It can be difficult to determine if an Incorporating Agency or Registration Agency operates at a national, regional, or locality level in a particular country. While the updates to the Approved Incorporating Agencies need to be approved by a verification specialist and verification auditor, this step did not identify that some Incorporating Agencies should have been at the country or country and state level.

The Approved Incorporating Agencies were corrected as part of bug 1802916 and, on March 14, 2023, the verification system was updated to avoid this issue from re-occurring:

To ensure the correct jurisdiction data is in the EV certificate, when a verification specialist verifies or re-verifies an Organization, they will follow this process: 1) Select businessCategory = Private Organization, 2) Choose the country, 3) Choose a registry from the dropdown for the country, 4) Verify the organization is in the registry and record data including the registration number. Note by selecting the registry, the jurisdiction location information is selected in an automated fashion, which mitigates human error in selecting this data.
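As an illustration of this dropdown-driven approach, here is a minimal sketch; the registry names, data, and function are hypothetical and do not represent Entrust's actual Approved Incorporating Agencies data or verification system:

    # Hypothetical sketch of the dropdown-driven approach: the registry names and
    # jurisdiction data below are invented for illustration and are not Entrust's
    # Approved Incorporating Agencies list or verification system.
    APPROVED_REGISTRIES = {
        # Country-level agency: only a jurisdiction country is implied.
        "Example National Companies Registry": {"country": "GB"},
        # State-level agency: jurisdiction state/province and country are implied.
        "Example State Corporations Division": {"country": "US", "state": "Delaware"},
    }

    def jurisdiction_for_registry(registry_name: str) -> dict:
        """Return the jurisdiction fields implied by the selected registry.
        A locality value is never produced by this path, which removes the
        manual-entry step where the error described above occurred."""
        try:
            return dict(APPROVED_REGISTRIES[registry_name])
        except KeyError:
            raise ValueError(f"{registry_name!r} is not an approved incorporating agency")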

3. Why were some certificates still mis-issued after March 14, 2023?

After this solution went into production on March 14, 2023, the verification team updated certificate profiles based on the Approved Incorporating Agencies when these profiles were up for renewal. At that time, we believed that we had found all mis-issued certificates.

4. Why were these certificates not detected?

When we checked for mis-issued certificates for bug 1802916, we looked for specific patterns and updated the corresponding certificate profiles, which did not include a missing state or province where a locality was present. We did not appropriately investigate or escalate at key points.

5. Why did you not re-verify all certificate profiles?

We should have reviewed all certificate profiles where the Approved Incorporating Agency data was modified. We did not do so because this was not part of our procedure, although it will be going forward.

Lessons Learned

  • We need additional capacity to ensure that escalation processes are followed and to provide additional support to teams responsible for product compliance and verification.
  • Our validation tool is still subject to human error. Although we have made significant improvements to reduce human error, we need to develop a set of input validation tools that prevent invalid combinations of jurisdiction fields (a sketch of such a check follows this list).
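A minimal sketch of such an input validation check, assuming the EV Guidelines rules for the jurisdiction-of-incorporation fields (country always required, a state or province requires a country, and a locality requires a state or province); the function name, message text, and example values are illustrative, not an actual Entrust interface:

    # A sketch of the assumed rule set; the example field values below are invented.
    def validate_jurisdiction(country, state, locality):
        """Return a list of problems; an empty list means the combination of
        jurisdiction fields is allowed."""
        problems = []
        if not country:
            problems.append("jurisdictionCountryName is required")
        if state and not country:
            problems.append("jurisdictionStateOrProvinceName requires jurisdictionCountryName")
        if locality and not state:
            problems.append("jurisdictionLocalityName requires jurisdictionStateOrProvinceName")
        return problems

    # The pattern seen in the affected certificates (locality present, state missing)
    # is rejected:
    assert validate_jurisdiction("CA", None, "Toronto") == [
        "jurisdictionLocalityName requires jurisdictionStateOrProvinceName"
    ]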

What went well

  • pkilint was implemented on April 11, 2024 and has not detected any further mis-issuances.
  • Once senior leadership was notified of the event, this mis-issuance was confirmed and the impacted certificates were revoked within 5 days as required.

What didn't go well

  • Although we have had several bugs related to jurisdiction as listed in the timeline, and we had implemented system changes to prevent these types of incidents in the future, these actions did not completely mitigate the issue.
  • We should have followed escalation processes to address the potential mis-issuance that was detected during ad-hoc scans with pkilint.
  • The work of an engineer who had left the company was not properly handed over, resulting in a delay in problem identification.

Where we got lucky

  • Some certificates had already been replaced and revoked due to remediation of bug 1883843.

Action Items

We have identified the following action items and will consider this issue during our reflection on our recent incidents, the report on which we will publish to Mozilla and the community on or before June 7.

Action Item | Kind | Due Date
Review applicable policy and procedures | Prevent | June 28, 2024
Improve our internal mechanism for problems reported by internal staff | Detect | July 31, 2024
Reorganize product compliance and verification teams to provide additional organizational resources and oversight | Prevent | July 31, 2024
Implement additional input validation controls for verification | Mitigate | July 26, 2024
Implement pkilint as a post-issuance linter | Detect | Done

Appendix

Details of affected certificates

Attached is the list of impacted certificates.

(In reply to Wayne from comment #3)

I appreciate the full incident report is being prepared, but can you tell us when this issue was confirmed internally?

Hi, Wayne. Apologies for the late reply. As you can see, we have just posted the full incident report for bug 1897630 with the timeline.

Having read this incident report I'm still not quite grasping when the issue was confirmed internally. An incomplete fix was pushed 2023-03-14 that only applied to new subscribers, not pre-existing subscribers getting a re-issued certificate? As part of the annual identity verification the profile is carried over and no validation of existing data occurs? Or it does but is part of a human process despite the fields no longer being allowed per the incomplete fix?

Here is my interpretation of the timeline, please correct as necessary:
2024-03-09 is when an 'ad-hoc' scan occurs, but you mentioned a team requested this report. I'll ignore the blame being left on an engineer who has left, but you mention the work was not properly handed over? It sounds like the team had the full report of issues generated by pkilint and did not actually read it beyond the summary page. In handling this incident did this report get re-read for any additional 'missed' issues?

2024-04-03 looks to be when the issue is, again, confirmed internally and escalated to the verification team. The next day the compliance team is involved, and allegedly a deeper investigation starts. I'm seeing no followup here, could you clarify as it seems processes were followed here.

2024-04-08 has the same issue with a Government Entity detected, and presumably this is part of this incident. I'm quite unclear as to why there was no incident raised for this and a follow-up at the time?

2024-04-15 is when we get another pkilint report, and then an acknowledgment that it was not escalated internally.

2024-05-12 has the issue identified, again, and this time the incident handling procedure is started.

2024-05-16 is when I start getting confused as you mentioned mis-issuance is confirmed here and a 5-day revocation clock started at 11:55. Then on 2024-05-21 at 09:30 we're told all certificates were revoked. Following Entrust's unique interpretation... they would have all been revoked within 5 days? Yet we have the delayed revocation incident raised in #1898848. Entrust seem to be internally confused over how they interpret this requirement in this incident report alone, clarification would be appreciated.

Given the above when did Entrust confirm the issue occurred internally for a single certificate?

Moving onto the Root Cause Analysis we run into blaming the issue on human error again. By my count above there are five separate disclosed occasions when the issue was detected this year and it was only the last one that caused the issue to be followed up on.

I'm still quite unclear as to why certificates were still mis-issued and when the profiles were updated on a per-subscriber basis. To take an expired example from the mis-issued certificate we have a notBefore date of 2023-05-16. This certificate then expired and a corrected one produced 2024-05-03. With this as an example can you walk me through when this was corrected internally?

If Entrust are including similar issues then consider the following:
#1712106 (May 2021): Invalid localityName
#1792231 (Sept 2022): TLS Certificate issued with an incorrect state or province

Flags: needinfo?(ngook.kong)

(In reply to Wayne from comment #8)

Having read this incident report I'm still not quite grasping when the issue was confirmed internally.

The incident was confirmed internally on May 16th 2024.

An incomplete fix was pushed 2023-03-14 that only applied to new subscribers, not pre-existing subscribers getting a re-issued certificate? As part of the annual identity verification the profile is carried over and no validation of existing data occurs? Or it does but is part of a human process despite the fields no longer being allowed per the incomplete fix?

The fix was applied to all Private Entities at time of verification or re-verification (annually). In this case, the verification specialist selected a drop down from the list of options, before they needed to manually apply the Locality, State/Province and Country. The existing validation is provided, but the new validation must use the new drop down that overrides the ability to insert the information incorrectly. Non-Private Entities still need to go through the manual Locality, State/Province and Country process.

Here is my interpretation of the timeline, please correct as necessary:
2024-03-09 is when an 'ad-hoc' scan occurs, but you mentioned a team requested this report. I'll ignore the blame being left on an engineer who has left, but you mention the work was not properly handed over? It sounds like the team had the full report of issues generated by pkilint and did not actually read it beyond the summary page.

As a correction, the team did not request a report. The engineer ran the scan as part of preparation work to implement pkilint as a post-issuance linter. The team was assessing the tool itself and the type of report it could generate. We were not seeking new data and had no reason to believe the report would reveal new errors. Also, the report was incomplete—it did not contain certificate numbers so the team was not able to easily double-check the data by referring back to a certificate. We were concerned that it might contain false positives as we had never used PKIlint before.

We noted the departure of the engineer in the spirit of transparency; responsibility for any subsequent action falls on Entrust, not the engineer.

In handling this incident did this report get re-read for any additional 'missed' issues?

No, this non-production scan/report was not re-read due to the deficiencies noted above. The subsequent scans/reports that occurred later in the timeline were improvements on the tool and produced more useful information. This early scan may not be relevant to this bug but we chose to disclose it in the spirit of transparency.

2024-04-03 looks to be when the issue is, again, confirmed internally and escalated to the verification team. The next day the compliance team is involved, and allegedly a deeper investigation starts. I'm seeing no follow up here, could you clarify as it seems processes were followed here.

Nothing was confirmed at this point, but yes, you are correct, the error highlighted by the scan was correctly escalated to the verification team in accordance with processes, and on the following day discussions continued, resulting in agreement that further investigation was needed. However, this further investigation did not start after that agreement, which was not in accordance with established processes. This process gap will be addressed by increased tracking and governance rigor.

2024-04-08 has the same issue with a Government Entity detected, and presumably this is part of this incident. I'm quite unclear as to why there was no incident raised for this and a follow-up at the time?

The potential issue with a Government Entity was detected based on the investigation that was started as result of the investigation initiated by the verification team on 2024-04-04. This investigation looked at other certificates and certificate profiles that had a missing state or province. Because the verification process for Government Entities is different from that used for private organizations, it was not obvious whether this was connected to the other data that related to certificates issued to private organizations. However, you are correct, this is another point in time when investigation should have been escalated but it was not.

2024-04-15 is when we get another pkilint report, and then an acknowledgment that it was not escalated internally.
2024-05-12 has the issue identified, again, and this time the incident handling procedure is started.
2024-05-16 is when I start getting confused as you mentioned mis-issuance is confirmed here and a 5-day revocation clock started at 11:55. Then
on 2024-05-21 at 09:30 we're told all certificates were revoked. Following Entrust's unique interpretation... they would have all been revoked within 5 days? Yet we have the delayed revocation incident raised in #1898848. Entrust seem to be internally confused over how they interpret this requirement in this incident report alone, clarification would be appreciated.

Correct, Entrust confirmed the mis-issuance on May 16th. We filed a preliminary incident report for it on May 18th, which was within 72 hours of the incident being confirmed. We revoked all the affected certificates within 5 days of the incident being confirmed. However, we filed a delayed reporting incident and a late revocation incident in anticipation that the community would point out that Entrust should have confirmed the incident and revoked the certs earlier.

Given the above when did Entrust confirm the issue occurred internally for a single certificate?

We confirmed the mis-issuance for all the certificates on May 16th. We acknowledge that this confirmation should have occurred much earlier if not for the process lapse.

Moving onto the Root Cause Analysis we run into blaming the issue on human error again. By my count above there are five separate disclosed occasions when the issue was detected this year and it was only the last one that caused the issue to be followed up on.

I'm still quite unclear as to why certificates were still mis-issued and when the profiles were updated on a per-subscriber basis. To take an expired example from the mis-issued certificate we have a not Before date of 2023-05-16. This certificate then expired and a corrected one produced 2024-05-03. With this as an example can you walk me through when this was corrected internally?

The reasons for why the certificates were mis-issued are provided in bug 1897630. This bug is being used to describe the reasons why it took an unacceptably long time to escalate, investigate and ultimately confirm the mis-issuance. You are correct, we believe these delays were largely due to human error, and insufficient process governance and we’ve listed the action items that we intend to follow to address these issues.

If Entrust are including similar issues then consider the following:
#1712106 (May 2021): Invalid localityName

Thank you, we determined that this bug (relating to spelling mistakes) was not related.

#1792231 (Sept 2022): TLS Certificate issued with an incorrect state or province

Thank you, we determined that this bug (relating to incorrect state/province) was not related.

Flags: needinfo?(ngook.kong)

(In reply to ngook.kong from comment #9)

(In reply to Wayne from comment #8)

Having read this incident report I'm still not quite grasping when the issue was confirmed internally.

The incident was confirmed internally on May 16th 2024.

An incomplete fix was pushed 2023-03-14 that only applied to new subscribers, not pre-existing subscribers getting a re-issued certificate? As part of the annual identity verification the profile is carried over and no validation of existing data occurs? Or it does but is part of a human process despite the fields no longer being allowed per the incomplete fix?

The fix was applied to all Private Entities at time of verification or re-verification (annually). In this case, the verification specialist selected a drop down from the list of options, before they needed to manually apply the Locality, State/Province and Country. The existing validation is provided, but the new validation must use the new drop down that overrides the ability to insert the information incorrectly. Non-Private Entities still need to go through the manual Locality, State/Province and Country process.

This is why I included an example at the end as a fix was applied March 14 2023. The Incident Report notes: "the verification team updated certificate profiles based on the Approved Incorporating Agencies when these profiles were up for renewal". To that end can I get an explanation on the following scenario and how it came about:

I'm still quite unclear as to why certificates were still mis-issued and when the profiles were updated on a per-subscriber basis. To take an expired example from the mis-issued certificate we have a notBefore date of 2023-05-16. This certificate then expired and a corrected one produced 2024-05-03. With this as an example can you walk me through when this was corrected internally?

I would gather there has been a process error in actually updating on renewal? The 2023 certificate is 2 months after the fix was deployed so I doubt there is overlap on a re-issuance delay. If Entrust could explain what happened I would greatly appreciate it.

As a correction, the team did not request a report. The engineer ran the scan as part of preparation work to implement pkilint as a post-issuance linter. The team was assessing the tool itself and the type of report it could generate. We were not seeking new data and had no reason to believe the report would reveal new errors. Also, the report was incomplete—it did not contain certificate numbers so the team was not able to easily double-check the data by referring back to a certificate. We were concerned that it might contain false positives as we had never used PKIlint before.

Just to clarify is the correction on my interpretation, or a correction to the original report? The report lacking certificate numbers makes no sense to me as a reason why this was not followed up. The lint would produce an error giving a defined case to search your certificate corpus for. But this mistake happened, so moving on.

2024-04-03 looks to be when the issue is, again, confirmed internally and escalated to the verification team. The next day the compliance team is involved, and allegedly a deeper investigation starts. I'm seeing no follow up here, could you clarify as it seems processes were followed here.

Nothing was confirmed at this point, but yes, you are correct, the error highlighted by the scan was correctly escalated to the verification team in accordance with processes, and on the following day discussions continued, resulting in agreement that further investigation was needed. However, this further investigation did not start after that agreement, which was not in accordance with established processes. This process gap will be addressed by increased tracking and governance rigor.

Okay so here we have a definitive point in time where the error surfaces again, and over multiple days discussions are had on further investigation. I would establish this as the point when the incident was confirmed but the scope was unknown. In much the same way that a Certificate Problem Report timer starts on the initial communication happening, I would put forward that a linter throwing up an error and a human looking at it is when an issue is confirmed to exist. That future mis-issuances stopped 2024-04-08 is telling.

2024-04-08 has the same issue with a Government Entity detected, and presumably this is part of this incident. I'm quite unclear as to why there was no incident raised for this and a follow-up at the time?

The potential issue with a Government Entity was detected based on the investigation that was started as result of the investigation initiated by the verification team on 2024-04-04. This investigation looked at other certificates and certificate profiles that had a missing state or province. Because the verification process for Government Entities is different from that used for private organizations, it was not obvious whether this was connected to the other data that related to certificates issued to private organizations. However, you are correct, this is another point in time when investigation should have been escalated but it was not.

What I am saying is that there has not been an incident filed here reporting the error that Entrust noted internally on 2024-04-04. Could you clarify if this potential issue was a mis-issuance? Is it part of these impacted certificates, or an unrelated matter?

However, we filed a delayed reporting incident and a late revocation incident in anticipation that the community would point out that Entrust should have confirmed the incident and revoked the certs earlier.

Is this following Entrust's incident handling practices? There have been previous statements to the effect that Entrust's incident handling practices have not changed in the past few months. Should every CA now file potential related incidents at the outset of an incident?

Moving onto the Root Cause Analysis we run into blaming the issue on human error again. By my count above there are five separate disclosed occasions when the issue was detected this year and it was only the last one that caused the issue to be followed up on.

I'm still quite unclear as to why certificates were still mis-issued and when the profiles were updated on a per-subscriber basis. To take an expired example from the mis-issued certificate we have a not Before date of 2023-05-16. This certificate then expired and a corrected one produced 2024-05-03. With this as an example can you walk me through when this was corrected internally?

The reasons for why the certificates were mis-issued are provided in bug 1897630. This bug is being used to describe the reasons why it took an unacceptably long time to escalate, investigate and ultimately confirm the mis-issuance. You are correct, we believe these delays were largely due to human error, and insufficient process governance and we’ve listed the action items that we intend to follow to address these issues.

I appreciate that Entrust are overloaded with incidents, but this is incident #1897630. The delayed reporting incident is #1898847. I reiterate my question.

Flags: needinfo?(ngook.kong)

(In reply to Wayne from comment #10)

(In reply to ngook.kong from comment #9)

(In reply to Wayne from comment #8)
I'm still quite unclear as to why certificates were still mis-issued and when the profiles were updated on a per-subscriber basis. To take an expired example from the mis-issued certificate we have a not Before date of 2023-05-16. This certificate then expired and a corrected one produced 2024-05-03. With this as an example can you walk me through when this was corrected internally?

I would gather there has been a process error in actually updating on renewal? The 2023 certificate is 2 months after the fix was deployed so I doubt there is overlap on a re-issuance delay. If Entrust could explain what happened I would greatly appreciate it.

To clarify, the fix implemented in March 2023 was a fix to the system for inputting/recording subscriber information during verification of new subscribers or re-verification of pre-existing subscribers; the fix did not go back and correct previously recorded data nor did it correct the certificates themselves.

With respect to the particular certificate that you identified,
1. Jan 2023: The annual re-verification of subscriber information (before the system fix was implemented.)
2. May 2023: When the certificate was renewed/re-issued, it used the information that had been recorded in the system in January.
3. March 2023 : The fix made in March 2023 generated a correction in how the information was recorded when it was up for re-verification in January 2024.
4. This updated information would appear in the certificate profile when it was renewed/re-issued in May 2024.

As a correction, the team did not request a report. The engineer ran the scan as part of preparation work to implement pkilint as a post-issuance linter. The team was assessing the tool itself and the type of report it could generate. We were not seeking new data and had no reason to believe the report would reveal new errors. Also, the report was incomplete—it did not contain certificate numbers so the team was not able to easily double-check the data by referring back to a certificate. We were concerned that it might contain false positives as we had never used PKIlint before.

Just to clarify is the correction on my interpretation, or a correction to the original report?

The response was intended to clarify our statement of what was requested and why.

2024-04-08 has the same issue with a Government Entity detected, and presumably this is part of this incident. I'm quite unclear as to why there was no incident raised for this and a follow-up at the time?

The potential issue with a Government Entity was detected based on the investigation that was started as result of the investigation initiated by the verification team on 2024-04-04. This investigation looked at other certificates and certificate profiles that had a missing state or province. Because the verification process for Government Entities is different from that used for private organizations, it was not obvious whether this was connected to the other data that related to certificates issued to private organizations. However, you are correct, this is another point in time when investigation should have been escalated but it was not.

What I am saying is that there has not been an incident filed here reporting the error that Entrust noted internally on 2024-04-04. Could you clarify if this potential issue was a mis-issuance? Is it part of these impacted certificates, or an unrelated matter?

We ultimately determined that the anomaly with the Government Entity certificate that was noted on 2024-04-04 was not a separate matter—it was part of this incident, and the certificate has been included as one of the impacted certificates.

However, we filed a delayed reporting incident and a late revocation incident in anticipation that the community would point out that Entrust should have confirmed the incident and revoked the certs earlier.

Is this following Entrust's incident handling practices?

No, this was an ad hoc decision due to the current circumstances.

There have been previous statements to the effect that Entrust's incident handling practices have not changed in the past few months. Should every CA now file potential related incidents at the outset of an incident?

No, we do not believe that any CA should file potential related incidents at the onset of an incident, and that is not why we submitted delayed reporting and late revocation incident reports for this incident. These reports were specifically submitted because we believed it would be the expectation of the community, given the context and feedback we’ve been receiving. This is not part of a new practice, as noted above it was an ad hoc decision.

I'm still quite unclear as to why certificates were still mis-issued and when the profiles were updated on a per-subscriber basis. To take an expired example from the mis-issued certificate we have a not Before date of 2023-05-16. This certificate then expired and a corrected one produced 2024-05-03. With this as an example can you walk me through when this was corrected internally?

The reasons for why the certificates were mis-issued are provided in bug 1897630. This bug is being used to describe the reasons why it took an unacceptably long time to escalate, investigate and ultimately confirm the mis-issuance. You are correct, we believe these delays were largely due to human error, and insufficient process governance and we’ve listed the action items that we intend to follow to address these issues.

I appreciate that Entrust are overloaded with incidents, but this is incident #1897630. The delayed reporting incident is #1898847. I reiterate my question.

We believe the reiterated question is the request to walk through the specific example of the certificate with the notBefore date of 2023-05-16, which expired and was replaced with a corrected certificate on 2024-05-03. This is provided further above in this Comment, but for convenience here it is again:

To clarify, the fix implemented in March 2023 was a fix to the system for inputting/recording subscriber information during verification of new subscribers or re-verification of pre-existing subscribers; the fix did not go back and correct previously recorded data nor did it correct the certificates themselves.

With respect to the particular certificate that you identified,
1. Jan 2023: The annual re-verification of subscriber information (before the system fix was implemented.)
2. May 2023: When the certificate was renewed/re-issued, it used the information that had been recorded in the system in January.
3. March 2023 : The fix made in March 2023 generated a correction in how the information was recorded when it was up for re-verification in January 2024.
4. This updated information would appear in the certificate profile when it was renewed/re-issued in May 2024.

Flags: needinfo?(ngook.kong)

(In reply to ngook.kong from comment #11)

To clarify, the fix implemented in March 2023 was a fix to the system for inputting/recording subscriber information during verification of new subscribers or re-verification of pre-existing subscribers; the fix did not go back and correct previously recorded data nor did it correct the certificates themselves.

With respect to the particular certificate that you identified,
1. Jan 2023: The annual re-verification of subscriber information (before the system fix was implemented.)
2. May 2023: When the certificate was renewed/re-issued, it used the information that had been recorded in the system in January.
3. March 2023 : The fix made in March 2023 generated a correction in how the information was recorded when it was up for re-verification in January 2024.
4. This updated information would appear in the certificate profile when it was renewed/re-issued in May 2024.

Thank you that was all I was asking for.

Just to clarify is the correction on my interpretation, or a correction to the original report?

The response was intended to clarify our statement of what was requested and why.

Okay it is a correction to the original report then.

2024-04-08 has the same issue with a Government Entity detected, and presumably this is part of this incident. I'm quite unclear as to why there was no incident raised for this and a follow-up at the time?

The potential issue with a Government Entity was detected based on the investigation that was started as result of the investigation initiated by the verification team on 2024-04-04. This investigation looked at other certificates and certificate profiles that had a missing state or province. Because the verification process for Government Entities is different from that used for private organizations, it was not obvious whether this was connected to the other data that related to certificates issued to private organizations. However, you are correct, this is another point in time when investigation should have been escalated but it was not.

What I am saying is that there has not been an incident filed here reporting the error that Entrust noted internally on 2024-04-04. Could you clarify if this potential issue was a mis-issuance? Is it part of these impacted certificates, or an unrelated matter?

We ultimately determined that the anomaly with the Government Entity certificate that was noted on 2024-04-04 was not a separate matter—it was part of this incident, and the certificate has been included as one of the impacted certificates.

Okay, so multiple confirmations occurred in regards to that specific subscriber incident and no incident was created on Bugzilla.

However, we filed a delayed reporting incident and a late revocation incident in anticipation that the community would point out that Entrust should have confirmed the incident and revoked the certs earlier.

Is this following Entrust's incident handling practices?

No, this was an ad hoc decision due to the current circumstances.

There have been previous statements to the effect that Entrust's incident handling practices have not changed in the past few months. Should every CA now file potential related incidents at the outset of an incident?

No, we do not believe that any CA should file potential related incidents at the onset of an incident, and that is not why we submitted delayed reporting and late revocation incident reports for this incident. These reports were specifically submitted because we believed it would be the expectation of the community, given the context and feedback we’ve been receiving. This is not part of a new practice, as noted above it was an ad hoc decision.

So at no point was adherence to Entrust's own policies, CCADB's own policies, nor the Root Programs' considered. We are now at the stage that Entrust are making ad-hoc decisions on what incidents to create bug reports for? There is no attempt at consistency? Are the Root Programs being directly contacted for these 'incidents' as required?

I appreciate that Entrust are overloaded with incidents, but this is incident #1897630. The delayed reporting incident is #1898847. I reiterate my question.

We believe the reiterated question is the request to walk through the specific example of the certificate with the notBefore date of 2023-05-16, which expired and was replaced with a corrected certificate on 2024-05-03. This is provided further above in this Comment, but for convenience here it is again:

Yes I was providing that question again with more clarity to ensure there were no communication issues. Thank you for addressing it.

Flags: needinfo?(ngook.kong)

(In reply to Wayne from comment #12)

(In reply to ngook.kong from comment #11)

To clarify, the fix implemented in March 2023 was a fix to the system for inputting/recording subscriber information during verification of new subscribers or re-verification of pre-existing subscribers; the fix did not go back and correct previously recorded data nor did it correct the certificates themselves.

With respect to the particular certificate that you identified,
1. Jan 2023: The annual re-verification of subscriber information (before the system fix was implemented.)
2. May 2023: When the certificate was renewed/re-issued, it used the information that had been recorded in the system in January.
3. March 2023 : The fix made in March 2023 generated a correction in how the information was recorded when it was up for re-verification in January 2024.
4. This updated information would appear in the certificate profile when it was renewed/re-issued in May 2024.

Thank you that was all I was asking for.

Just to clarify is the correction on my interpretation, or a correction to the original report?

The response was intended to clarify our statement of what was requested and why.

Okay it is a correction to the original report then.

2024-04-08 has the same issue with a Government Entity detected, and presumably this is part of this incident. I'm quite unclear as to why there was no incident raised for this and a follow-up at the time?

The potential issue with a Government Entity was detected based on the investigation that was started as result of the investigation initiated by the verification team on 2024-04-04. This investigation looked at other certificates and certificate profiles that had a missing state or province. Because the verification process for Government Entities is different from that used for private organizations, it was not obvious whether this was connected to the other data that related to certificates issued to private organizations. However, you are correct, this is another point in time when investigation should have been escalated but it was not.

What I am saying is that there has not been an incident filed here reporting the error that Entrust noted internally on 2024-04-04. Could you clarify if this potential issue was a mis-issuance? Is it part of these impacted certificates, or an unrelated matter?

We ultimately determined that the anomaly with the Government Entity certificate that was noted on 2024-04-04 was not a separate matter—it was part of this incident, and the certificate has been included as one of the impacted certificates.

Okay, so multiple confirmations occurred in regards to that specific subscriber incident and no incident was created on Bugzilla.

However, we filed a delayed reporting incident and a late revocation incident in anticipation that the community would point out that Entrust should have confirmed the incident and revoked the certs earlier.

Is this following Entrust's incident handling practices?

No, this was an ad hoc decision due to the current circumstances.

There have been previous statements to the effect that Entrust's incident handling practices have not changed in the past few months. Should every CA now file potential related incidents at the outset of an incident?

No, we do not believe that any CA should file potential related incidents at the onset of an incident, and that is not why we submitted delayed reporting and late revocation incident reports for this incident. These reports were specifically submitted because we believed it would be the expectation of the community, given the context and feedback we’ve been receiving. This is not part of a new practice, as noted above it was an ad hoc decision.

So at no point was adherence to Entrust's own policies, CCADB's own policies, nor the Root Programs' considered. We are now at the stage that Entrust are making ad-hoc decisions on what incidents to create bug reports for? There is no attempt at consistency? Are the Root Programs being directly contacted for these 'incidents' as required?

I believe you are asking why we made an ad hoc decision to file a delayed revocation report at the outset of this incident. As we noted above, “we filed a delayed reporting incident and a late revocation incident in anticipation that the community would point out that Entrust should have confirmed the incident and revoked the certs earlier.” While it may have been more consistent not to file a delayed revocation report alongside this mis-issuance report, given that the mis-issuance was confirmed and the affected certificates were revoked within the timeline required by the TLS Baseline Requirements, we wanted to be as candid and thorough as possible in explaining to the community that this bug should have been escalated and investigated thoroughly earlier in time.

I appreciate that Entrust are overloaded with incidents, but this is incident #1897630. The delayed reporting incident is #1898847. I reiterate my question.

We believe the reiterated question is the request to walk through the specific example of the certificate with the notBefore date of 2023-05-16, which expired and was replaced with a corrected certificate on 2024-05-03. This is provided further above in this Comment, but for convenience here it is again:

Yes I was providing that question again with more clarity to ensure there were no communication issues. Thank you for addressing it.

Flags: needinfo?(ngook.kong)

(In reply to ngook.kong from comment #13)

So at no point was adherence to Entrust's own policies, CCADB's own policies, nor the Root Programs' considered. We are now at the stage that Entrust are making ad-hoc decisions on what incidents to create bug reports for? There is no attempt at consistency? Are the Root Programs being directly contacted for these 'incidents' as required?

I believe you are asking why we made an ad hoc decision to file a delayed revocation report at the outset of this incident. As we noted above, “we filed a delayed reporting incident and a late revocation incident in anticipation that the community would point out that Entrust should have confirmed the incident and revoked the certs earlier.” While it may have been more consistent not to file a delayed revocation report alongside this mis-issuance report, given that the mis-issuance was confirmed and the affected certificates were revoked within the timeline required by the TLS Baseline Requirements, we wanted to be as candid and thorough as possible in explaining to the community that this bug should have been escalated and investigated thoroughly earlier in time.

My point is that during this incident response we are being expressly told that Entrust are not sticking to any procedures. Ad-hoc processes are being used when it is in Entrust's best interests to show a strict adherence to their internal protocols. When these incidents for potential non-incidents were created here, were they also sent to Root Programs privately as required? I'm sure they're as baffled as I am here.

Flags: needinfo?(ngook.kong)

(In reply to Wayne from comment #14)

(In reply to ngook.kong from comment #13)

So at no point was adherence to Entrust's own policies, CCADB's own policies, nor the Root Programs' considered. We are now at the stage that Entrust are making ad-hoc decisions on what incidents to create bug reports for? There is no attempt at consistency? Are the Root Programs being directly contacted for these 'incidents' as required?

I believe you are asking why we made an ad hoc decision to file a delayed revocation report at the outset of this incident. As we noted above, “we filed a delayed reporting incident and a late revocation incident in anticipation that the community would point out that Entrust should have confirmed the incident and revoked the certs earlier.” While it may have been more consistent not to file a delayed revocation report alongside this mis-issuance report, given that the mis-issuance was confirmed and the affected certificates were revoked within the timeline required by the TLS Baseline Requirements, we wanted to be as candid and thorough as possible in explaining to the community that this bug should have been escalated and investigated thoroughly earlier in time.

My point is that during this incident response we are being expressly told that Entrust are not sticking to any procedures. Ad-hoc processes are being used when it is in Entrust's best interests to show a strict adherence to their internal protocols. When these incidents for potential non-incidents were created here, were they also sent to Root Programs privately as required? I'm sure they're as baffled as I am here.

Acknowledged, and note that we disagree with your characterization of this. We do agree it would have been easier not to have filed a delayed revocation report in this instance.

Flags: needinfo?(ngook.kong)

(In reply to ngook.kong from comment #15)

My point is that during this incident response we are being expressly told that Entrust are not sticking to any procedures. Ad-hoc processes are being used when it is in Entrust's best interests to show a strict adherence to their internal protocols. When these incidents for potential non-incidents were created here, were they also sent to Root Programs privately as required? I'm sure they're as baffled as I am here.

Acknowledged, and note that we disagree with your characterization of this. We do agree it would have been easier not to have filed a delayed revocation report in this instance.

My question was quite specifically: When these incidents for potential non-incidents were created here, were they also sent to Root Programs privately as required?

Entrust are more than entitled to disagree with my characterization; I would suggest that if this is the case, they articulate why they feel such statements are inaccurate. On one hand we are being told that Entrust are sticking with their processes and procedures, which have not changed for months. On the other we are now being told that this incident is being handled in an ad-hoc manner, which would specifically imply it is not following policy and procedures. Does this make sense?

Flags: needinfo?(ngook.kong)

7 days have gone by without an answer to the above question.

(In reply to Wayne from comment #16)

(In reply to ngook.kong from comment #15)

My point is that during this incident response we are being expressly told that Entrust are not sticking to any procedures. Ad-hoc processes are being used when it is in Entrust's best interests to show a strict adherence to their internal protocols. When these incidents for potential non-incidents were created here, were they also sent to Root Programs privately as required? I'm sure they're as baffled as I am here.

Acknowledged, and note that we disagree with your characterization of this. We do agree it would have been easier not to have filed a delayed revocation report in this instance.

My question was quite specifically: When these incidents for potential non-incidents were created here, were they also sent to Root Programs privately as required?

We apologize for the oversight in getting this posted. We were focused on the revocation (Bug 1890898). Please accept our apologies.

We would like to address your question, but need to ensure we are accurate. Could you please provide the reference(s) to the specific requirements you are referencing?

Also, to clarify, we are not claiming that this bug is a non-incident or a potential non-incident. Are you saying that the issue we described in this bug is a non-incident?

Entrust are more than entitled to disagree with my characterization; I would suggest that if this is the case, they articulate why they feel such statements are inaccurate. On one hand we are being told that Entrust are sticking with their processes and procedures, which have not changed for months. On the other we are now being told that this incident is being handled in an ad-hoc manner, which would specifically imply it is not following policy and procedures. Does this make sense?

We disagree with the following statements:
• From Comment 12: “So at no point was adherence to Entrust's own policies, CCADB's own policies, nor the Root Programs' considered. “
• From Comment 14: “during this incident response we are being expressly told that Entrust are not sticking to any procedures. Ad-hoc processes are being used when it is in Entrust's best interests to show a strict adherence to their internal protocols.”

We have been doing a thorough analysis of our own policies (and the gaps in those policies), as well as the applicable industry and Root Program requirements, very carefully.

We have committed to making specific formal changes, but in the meantime we are trying to be responsive and make improvements when and how we can, which does include some ad hoc decision making. It is not ideal, but the motivation for the changes and for the ad-hoc processes has been to attempt to meet the community’s expectations.

(In reply to Bruce Morton from comment #18)

We apologize for the oversight in getting this posted. We were focused on the revocation (Bug 1890898). Please accept our apologies.

I will take this statement to mean there are still improvements to be made in noting outstanding questions. I will also note that the response time is suddenly quite fast; the assertion that the revocation was the reason is far less likely than that it simply got missed - it's an honest mistake.

We would like to address your question, but need to ensure we are accurate. Could you please provide the reference(s) to the specific requirements you are referencing?

Do the specific requirements I am implying matter when at least one of them requires it? To note, in alphabetical order:
Apple are specific that a copy goes to them:

  1. Incidents
    Failure to comply with the above requirements in any way is considered an incident. CA providers must report such incidents to the Apple Root Program at certificate-authority-program@apple.com with a full incident report. This report can be shared directly or as a link from a public disclosure (e.g. Bugzilla).

Chrome are subjective, dependent on a public report already existing:

  1. Reporting and Responding to Incidents
    ...
    If the Chrome Root Program Participant has not yet publicly disclosed an incident, they MUST notify chrome-root-program [at] google [dot] com and include an initial timeline for public disclosure. Chrome uses the information in the public disclosure as the basis for evaluating incidents.

Microsoft notably have no specific requirements.

Mozilla are also subjective:

2.4 Incidents
When a CA operator fails to comply with any requirement of this policy - whether it be a misissuance, a procedural or operational issue, or any other variety of non-compliance - the event is classified as an incident and MUST be reported to Mozilla as soon as the CA operator is made aware. At a minimum, CA operators MUST promptly report all incidents to Mozilla in the form of an Incident Report that follows guidance provided on the CCADB website.

Also, to clarify, we are not claiming that this bug is a non-incident or a potential non-incident. Are you saying that the issue we described in this bug is a non-incident?

I am quite specifically not commenting on this particular incident #1897630, but on the alleged incidents attached to it. As the timelines of both this incident and incident #1898848 show, by Entrust's own metrics there are 0 outstanding certificates. A delayed revocation incident was nevertheless raised.

My question was quite simple: when this incident was raised, were it and its related incidents also sent to the Root Programs that require it? Given the timeline it is possible more than one Root Program was involved, and the substance is not at issue. It is just a simple yes/no question; there is no need to waste more words than that on it.

We disagree with the following statements:

  • From Comment 12: “So at no point was adherence to Entrust's own policies, CCADB's own policies, nor the Root Programs' considered. “
  • From Comment 14: “during this incident response we are being expressly told that Entrust are not sticking to any procedures. Ad-hoc processes are being used when it is in Entrust's best interests to show a strict adherence to their internal protocols.”

I will note that the quotes from Comment 12 and Comment 14 came out due to this statement by Entrust:

These reports were specifically submitted because we believed it would be the expectation of the community, given the context and feedback we’ve been receiving.

Therefore the ad-hoc decision to create the reports did not come from any policy. I will note that this is not a statement that should cause contention; to further quote Entrust:

Is this following Entrust's incident handling practices?
No, this was an ad hoc decision due to the current circumstances.

We have been doing a thorough analysis of our own policies (and the gaps in those policies), as well as the applicable industry and Root Program requirements, very carefully.

We have committed to making specific formal changes, but in the meantime we are trying to be responsive and make improvements when and how we can, which does include some ad hoc decision making. It is not ideal, but the motivation for the changes and for the ad-hoc processes has been to attempt to meet the community’s expectations.

Furthermore, it is substantiated by this comment. We have a disagreement on whether ad-hoc decisions should occur at this point, but that is not a matter suitable for discussion here.

Flags: needinfo?(bruce.morton)

Action Items

| Action Item | Kind | Due Date |
| --- | --- | --- |
| Review applicable policy and procedures | Prevent | Done |
| Improve our internal problem reporting mechanism for issues reported by internal staff | Detect | July 31, 2024 |
| Reorganize product compliance and verification teams to provide additional organizational resources and oversight | Prevent | July 31, 2024 |
| Implement additional input validation controls for verification | Mitigate | July 26, 2024 |
| Implement pkilint as a post-issuance linter (see the sketch below) | Detect | Done |
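
As an illustration of the kind of rule the post-issuance linter is meant to catch, here is a minimal sketch using the Python `cryptography` library. This is not pkilint's own code, and the file name and function name are hypothetical; it checks only the single condition at issue in this bug (jurisdictionLocalityName present in the subject while jurisdictionStateOrProvinceName is absent), whereas pkilint applies a much broader set of checks.

```python
# Hypothetical sketch only - not pkilint's implementation. It flags a certificate
# whose subject contains jurisdictionLocalityName without
# jurisdictionStateOrProvinceName, the condition described in this bug.
from cryptography import x509

# EV Guidelines jurisdiction-of-incorporation attribute OIDs
JURISDICTION_LOCALITY = x509.ObjectIdentifier("1.3.6.1.4.1.311.60.2.1.1")
JURISDICTION_STATE_OR_PROVINCE = x509.ObjectIdentifier("1.3.6.1.4.1.311.60.2.1.2")
JURISDICTION_COUNTRY = x509.ObjectIdentifier("1.3.6.1.4.1.311.60.2.1.3")  # listed for reference only


def jurisdiction_fields_consistent(pem_data: bytes) -> bool:
    """Return False when jurisdictionLocalityName is present in the subject
    without jurisdictionStateOrProvinceName."""
    cert = x509.load_pem_x509_certificate(pem_data)
    present = {attr.oid for attr in cert.subject}
    return not (
        JURISDICTION_LOCALITY in present
        and JURISDICTION_STATE_OR_PROVINCE not in present
    )


# Example usage (file name is illustrative):
# with open("issued_ev_cert.pem", "rb") as f:
#     print(jurisdiction_fields_consistent(f.read()))
```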
Flags: needinfo?(bruce.morton)

(In reply to Wayne from comment #19)

(In reply to Bruce Morton from comment #18)

We apologize for the oversight in getting this posted. We were focused on the revocation (Bug 1890898). Please accept our apologies.

I will take this statement to mean there are still improvements to be made in noting outstanding questions. I will also note that the response time is suddenly quite fast; the assertion that the revocation was the reason is far less likely than that it simply got missed - it's an honest mistake.

We would like to address your question, but need to ensure we are accurate. Could you please provide the reference(s) to the specific requirements you are referencing?

Do the specific requirements I am implying matter when at least one of them requires it? To note, in alphabetical order:
Apple are specific that a copy goes to them:

  1. Incidents
    Failure to comply with the above requirements in any way is considered an incident. CA providers must report such incidents to the Apple Root Program at certificate-authority-program@apple.com with a full incident report. This report can be shared directly or as a link from a public disclosure (e.g. Bugzilla).

Chrome are subjective, dependent on a public report already existing:

  1. Reporting and Responding to Incidents
    ...
    If the Chrome Root Program Participant has not yet publicly disclosed an incident, they MUST notify chrome-root-program [at] google [dot] com and include an initial timeline for public disclosure. Chrome uses the information in the public disclosure as the basis for evaluating incidents.

Microsoft notably have no specific requirements.

Mozilla are also subjective:

2.4 Incidents
When a CA operator fails to comply with any requirement of this policy - whether it be a misissuance, a procedural or operational issue, or any other variety of non-compliance - the event is classified as an incident and MUST be reported to Mozilla as soon as the CA operator is made aware. At a minimum, CA operators MUST promptly report all incidents to Mozilla in the form of an Incident Report that follows guidance provided on the CCADB website.

Also, to clarify, we are not claiming that this bug is a non-incident or a potential non-incident. Are you saying that the issue we described in this bug is a non-incident?

I am quite specifically not commenting on this particular incident #1897630, but on the alleged incidents attached to it. As the timelines of both this incident and incident #1898848 show, by Entrust's own metrics there are 0 outstanding certificates. A delayed revocation incident was nevertheless raised.

My question was quite simple: when this incident was raised, were it and its related incidents also sent to the Root Programs that require it? Given the timeline it is possible more than one Root Program was involved, and the substance is not at issue. It is just a simple yes/no question; there is no need to waste more words than that on it.

No. We failed to notify Apple and Microsoft. We will address this in the future.

We disagree with the following statements:

  • From Comment 12: “So at no point was adherence to Entrust's own policies, CCADB's own policies, nor the Root Programs' considered. “
  • From Comment 14: “during this incident response we are being expressly told that Entrust are not sticking to any procedures. Ad-hoc processes are being used when it is in Entrust's best interests to show a strict adherence to their internal protocols.”

I will note that the quotes from Comment 12 and Comment 14 came out due to this statement by Entrust:

These reports were specifically submitted because we believed it would be the expectation of the community, given the context and feedback we’ve been receiving.

Therefore the ad-hoc decision to create the reports did not come from any policy. I will note that this is not a statement that should cause contention; to further quote Entrust:

Is this following Entrust's incident handling practices?
No, this was an ad hoc decision due to the current circumstances.

We have been doing a thorough analysis of our own policies (and the gaps in those policies), as well as the applicable industry and Root Program requirements, very carefully.

We have committed to making specific formal changes, but in the meantime we are trying to be responsive and make improvements when and how we can, which does include some ad hoc decision making. It is not ideal, but the motivation for the changes and for the ad-hoc processes has been to attempt to meet the community’s expectations.

Furthermore, it is substantiated by this comment. We have a disagreement on whether ad-hoc decisions should occur at this point, but that is not a matter suitable for discussion here.

Flags: needinfo?(ngook.kong)