Open Bug 2004699 Opened 2 months ago Updated 4 hours ago

Netlock: CA in AIA in PEM format

Categories

(CA Program :: CA Certificate Compliance, task)

Tracking

(Not tracked)

ASSIGNED

People

(Reporter: kaluha.roland, Assigned: kaluha.roland)

Details

(Whiteboard: [ca-compliance] [policy-failure])

Preliminary Incident Report

Summary

  • Incident description:
    A third party reported that the intermediate certificate
    C=HU, L=Budapest, O=NETLOCK Ltd., CN=NETLOCK Trust CA
    with fingerprint 58d7a197f09a6ea552b8ea6b1a53185a030a3ad8d52220c00c44e3f450e4fb90
    contains an Authority Information Access (AIA) – CA Issuers URI that points to an HTTP resource returning a PEM-encoded certificate.
    According to RFC 5280 §4.2.2.1, AIA HTTP URIs must provide a DER-encoded certificate or a DER/BER CMS “certs-only” container, therefore serving PEM content does not comply with the standard.

  • Relevant policies:

    • RFC 5280 §4.2.2.1 requirements for AIA / CA Issuers endpoints and encoding.
    • CCADB incident reporting guidelines requiring disclosure of any non-compliance affecting CA certificates.
  • Source of incident disclosure:
    Third Party Reported.
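The encoding mismatch described above is easy to detect mechanically: a DER-encoded certificate begins with the ASN.1 SEQUENCE tag byte 0x30, while PEM content begins with the ASCII armor line -----BEGIN CERTIFICATE-----. A minimal sketch of such a check (the sample bytes are illustrative, not the actual NETLOCK certificate):

```python
# Distinguish DER from PEM certificate content by inspecting the leading bytes.
# DER-encoded X.509 certificates start with the ASN.1 SEQUENCE tag (0x30);
# PEM files start with the ASCII armor line "-----BEGIN CERTIFICATE-----".

def classify_cert_bytes(data: bytes) -> str:
    if data.startswith(b"-----BEGIN CERTIFICATE-----"):
        return "PEM"
    if data[:1] == b"\x30":
        return "DER"
    return "unknown"

# Illustrative samples (not real certificates):
pem_sample = b"-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
der_sample = b"\x30\x82\x01\x00" + b"\x00" * 4  # SEQUENCE header plus payload

print(classify_cert_bytes(pem_sample))  # PEM
print(classify_cert_bytes(der_sample))  # DER
```

A periodic job applying a check like this to each AIA response would surface this class of non-compliance as soon as it is introduced.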

Assignee: nobody → kaluha.roland
Status: UNCONFIRMED → ASSIGNED
Ever confirmed: true
Summary: der format → Netlock: CA in AIA in PEM format
Whiteboard: [ca-compliance] [policy-failure]

I observe that this is yet another instance of NetLock failing to provide satisfactory incident reports in Bugzilla. You have not provided the full report within 14 days of notification, as CCADB policy requires. More concerning still, you have not even fixed all of the non-conformities that remain active.

The following URIs still return certificates in PEM encoding, in violation of RFC 5280 Section 4.2.2.1:

There is a clear and well-established pattern demonstrating that this Certificate Authority is either unable to follow the Baseline Requirements or feels it is not obliged to do so. This continues without improvement.

In Bug 1904041 you promised to improve the 'six-eyes' review. I must ask: why did these six eyes not see that the AIA content is in PEM format? Do these eyes only look at the text of the certificate, without testing whether the links actually function according to the RFC? The community wants to see technical 'blocking' controls, not human promises.

I request, as has been done in other bugs, that you provide the specific audit workpapers or statements from your auditor for the years 2023, 2024, and 2025, explaining how these URIs were verified during the audit cycles. If the auditor did not find that the certificates are served as PEM, then we must wonder whether the auditor is competent to evaluate public trust. Why was this not found sooner?

Flags: needinfo?(kaluha.roland)

Full Incident Report

Summary

  • Incident description:
    NETLOCK identified a non-compliance where several Authority Information Access (AIA) HTTP endpoints returned issuer certificates in PEM encoding instead of DER encoding, violating RFC 5280 Section 4.2.2.1.

    TLS certificates are issued from the following issuing CA hierarchies:

    TLS certificates are not issued from the following issuing CA hierarchy:

  • Timeline summary:

    • Non-compliance start dates (per AIA URI):
      • DVCA: 2021-08-25
      • pdvca: 2019-02-20
      • trustev3: 2020-02-10
      • qtrustev3: 2020-02-10
    • Non-compliance identified date: 2025-12-06
    • Non-compliance end date: Planned 2026-01-09
  • Relevant policies:

    • RFC 5280 Section 4.2.2.1
    • CCADB Incident Reporting Requirements
    • CA/Browser Forum Baseline Requirements (general compliance expectations)
  • Source of incident disclosure:
    External community report via Mozilla Bugzilla


Impact

  • Total number of certificates: All end-entity certificates chaining to the affected issuers
  • Total number of "remaining valid" certificates: All
  • Affected certificate types:
    Issuer (CA) certificates referenced via AIA
  • Incident heuristic:
    Standards compliance failure (AIA encoding)
  • Was issuance stopped in response to this incident, and why or why not?:
    No. Although TLS certificates are issued from some affected hierarchies, the issue impacts only the encoding of issuer certificates served via AIA and does not affect certificate issuance, validity, or cryptographic security.
  • Analysis:
    The incident did not result in mis-issuance, invalid certificates, or key compromise. The impact is limited to RFC 5280 compliance for relying parties that retrieve issuer certificates via AIA and strictly enforce DER encoding.
  • Additional considerations:
    The affected AIA endpoints serve issuer certificates only; end-entity TLS certificates remain unaffected.

Timeline

  • 2019-02-20: Non-compliant AIA configuration introduced for pdvca.
  • 2020-02-10: Non-compliant AIA configuration introduced for trustev3 and qtrustev3.
  • 2021-08-25: Non-compliant AIA configuration introduced for DVCA.
  • 2025-12-06: External community report received identifying the AIA encoding non-compliance.
  • 2025-12-08: Bugzilla bug filed by NETLOCK.
  • 2025-12-08: Internal investigation initiated.
  • 2025-12-08: Technical staff opened an internal defect tracking ticket.
  • 2025-12-09: Affected RSA issuing CA hierarchies identified and mapped.
  • 2025-12-15: Affected ECC issuing CA hierarchies identified and mapped.
  • 2025-12-19: Developers completed the required software release implementing the fix.
  • 2025-12-30: The release will be demonstrated and validated in the appropriate demo environment (planned).
  • 2026-01-09: Planned production release deployment.

Related Incidents

No related incidents.


Root Cause Analysis

Contributing Factor #1: Legacy AIA service configuration

  • Description:
    AIA HTTP services were configured to return issuer certificates in PEM format instead of DER format.
  • Timeline:
    Introduced between 2019 and 2021 depending on the issuing hierarchy.
  • Detection:
    Detected through external community reporting.
  • Interaction with other factors:
    Insufficient automated compliance checks allowed the issue to persist.
  • Root Cause Analysis methodology used:
    Retrospective technical configuration review.

Contributing Factor #2: Insufficient automated compliance controls

  • Description:
    No automated validation existed to verify RFC 5280 compliance of AIA endpoint responses.
  • Timeline:
    Ongoing until external disclosure.
  • Detection:
    Identified during internal investigation.
  • Interaction with other factors:
    Reliance on manual review processes.
  • Root Cause Analysis methodology used:
    Compliance control gap analysis.

Lessons Learned

  • What went well:
    The issue was promptly investigated once reported.
  • What didn’t go well:
    The non-compliance was not detected internally for several years.
  • Where we got lucky:
    The issue did not impact end-entity certificates or cryptographic security.
  • Additional:
    Automated technical controls are required in addition to human review.

Action Items

Action Item | Kind | Corresponding Root Cause(s) | Evaluation Criteria | Due Date | Status
Correct AIA endpoints to serve DER | Corrective | Root Cause #1 | DER verified by automated test | 2026-01-09 | In Progress
Implement automated AIA validation | Prevent | Root Cause #2 | Monitoring alerts enabled | 2026-02-15 | Planned
Update internal compliance checklists | Prevent | Root Cause #2 | Checklist approved and in operational use | 2026-01-31 | In Progress
Clarify auditor scope for AIA endpoint checks | Prevent | Root Cause #2 | Audit evidence updated | 2026-06-30 | Planned
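The corrective action "Correct AIA endpoints to serve DER" is mechanically straightforward: PEM is base64-encoded DER wrapped in armor lines, so the fix amounts to stripping the armor and decoding. A minimal sketch, assuming each endpoint serves a single PEM certificate (the base64 body below is illustrative, not a real certificate):

```python
import base64

def pem_to_der(pem_text: str) -> bytes:
    """Strip PEM armor lines and base64-decode the body, yielding DER bytes."""
    body = "".join(
        line.strip()
        for line in pem_text.splitlines()
        if line.strip() and not line.startswith("-----")
    )
    return base64.b64decode(body)

# Illustrative input; the body is valid base64 but not a real certificate.
pem = (
    "-----BEGIN CERTIFICATE-----\n"
    "MIICszCC\n"
    "-----END CERTIFICATE-----\n"
)
der = pem_to_der(pem)
print(der.hex())  # starts with "3082": the ASN.1 SEQUENCE header of a DER structure
```

In practice the conversion would be done once at publication time, with the HTTP service then returning the DER bytes with the `application/pkix-cert` media type.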

Appendix

Flags: needinfo?(kaluha.roland)

Dear Community Members,

we would like to provide a brief update regarding the incident timeline.

Timeline update

  • 2025-12-30: The release was demonstrated and validated in the appropriate demo environment.
  • 2026-01-09: The production release deployment is still planned for 2026-01-09.

We will update this Bugzilla entry again after the production deployment has been completed.

Dear Community Members,

we would like to provide a further update regarding the incident progress.

As planned, the remediation activities proceeded according to schedule, and the production deployment was successfully completed on 2026-01-08.

We will continue to monitor the situation and will prepare and publish the closure summary shortly. We will update this Bugzilla entry once the closure summary is available.

Hi Roland,

We still observe PEM encoded content being served from a few endpoints:

$ curl http://aia3.netlock.hu/index.cgi?ca=gold
-----BEGIN CERTIFICATE-----
MIIEFTCCAv2gAwIBAgIGSUEs5AAQMA0GCSqGSIb3DQEBCwUAMIGnMQswCQYDVQQG
EwJIVTERMA8GA1UEBwwIQnVkYXBlc3QxFTATBgNVBAoMDE5ldExvY2sgS2Z0LjE3
MDUGA1UECwwuVGFuw7pzw610dsOhbnlraWFkw7NrIChDZXJ0aWZpY2F0aW9uIFNl
cnZpY2VzKTE1MDMGA1UEAwwsTmV0TG9jayBBcmFueSAoQ2xhc3MgR29sZCkgRsWR
dGFuw7pzw610dsOhbnkwHhcNMDgxMjExMTUwODIxWhcNMjgxMjA2MTUwODIxWjCB
pzELMAkGA1UEBhMCSFUxETAPBgNVBAcMCEJ1ZGFwZXN0MRUwEwYDVQQKDAxOZXRM
b2NrIEtmdC4xNzA1BgNVBAsMLlRhbsO6c8OtdHbDoW55a2lhZMOzayAoQ2VydGlm
aWNhdGlvbiBTZXJ2aWNlcykxNTAzBgNVBAMMLE5ldExvY2sgQXJhbnkgKENsYXNz
IEdvbGQpIEbFkXRhbsO6c8OtdHbDoW55MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAxCRec75LbRTDofTjl5Bu0jBFHjzuZ9lk4BqKf8owyoPjIMHj9DrT
lF8afFttvzBPhCf2nx9JvMaZCpDyD/V/Q4Q3Y1GLeqVw/HpYzY6b7cNGbIRwXdrz
AZAj/E4wqX7hJ2Pn7WQ8oLjJM2P+FpD/sLj916jAwJRDC7bVWaaeVtAkH3B5r9s5
VA1lddkVQZQBr17s9o3x/61k/iCa11zr/qYfCGSji3ZVrR47KGAuhyXoqq8fxmRG
ILdwfzzeSNuWU7c5d+Qa4scWhHaXWy+7GRWF+GmF9ZmnqfI0p6m2pgP8b4Y9VHx2
BJtr+UBdADTHLpl1neWIA6pN+APSQnbAGwIDAKiLo0UwQzASBgNVHRMBAf8ECDAG
AQH/AgEEMA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUzPpnk/C2uNClwB7zU/2M
U9+D15YwDQYJKoZIhvcNAQELBQADggEBAKt/7hwWqZw8UQCgwBEIBaeZ5m8BiFRh
bvG5GK1Krf6BQCOUL/t1fC8oS2IkgYIL9WHxHG64YTjrgfpioTtaYtOUZcTh5m2C
+C8lcLIhJsFyUR+MLMOEkMNaj7rP9KdlpeuY0fsFskZ1FSNqb4VjMIDw1Z4fKRzC
bLBQWV2QWzuoDTDPv31/zvGdg73JRm4gpvlhUbohL3u+pRVjodSVh/GeufOJ8z2F
uLjbvrW5KfnaNwUASZQDhETnv0Mxz3WLJdH0pmT1kvarBes96aULNmLazAZfNou2
XjG4Kvte9nHfRCaexOYNkbQudZWAUWpLMKawYqGT8ZvYzsRjdT9ZR7E=
-----END CERTIFICATE-----

$ curl http://aia2.netlock.hu/index.cgi?ca=gold
-----BEGIN CERTIFICATE-----
MIIEFTCCAv2gAwIBAgIGSUEs5AAQMA0GCSqGSIb3DQEBCwUAMIGnMQswCQYDVQQG
EwJIVTERMA8GA1UEBwwIQnVkYXBlc3QxFTATBgNVBAoMDE5ldExvY2sgS2Z0LjE3
MDUGA1UECwwuVGFuw7pzw610dsOhbnlraWFkw7NrIChDZXJ0aWZpY2F0aW9uIFNl
cnZpY2VzKTE1MDMGA1UEAwwsTmV0TG9jayBBcmFueSAoQ2xhc3MgR29sZCkgRsWR
dGFuw7pzw610dsOhbnkwHhcNMDgxMjExMTUwODIxWhcNMjgxMjA2MTUwODIxWjCB
pzELMAkGA1UEBhMCSFUxETAPBgNVBAcMCEJ1ZGFwZXN0MRUwEwYDVQQKDAxOZXRM
b2NrIEtmdC4xNzA1BgNVBAsMLlRhbsO6c8OtdHbDoW55a2lhZMOzayAoQ2VydGlm
aWNhdGlvbiBTZXJ2aWNlcykxNTAzBgNVBAMMLE5ldExvY2sgQXJhbnkgKENsYXNz
IEdvbGQpIEbFkXRhbsO6c8OtdHbDoW55MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAxCRec75LbRTDofTjl5Bu0jBFHjzuZ9lk4BqKf8owyoPjIMHj9DrT
lF8afFttvzBPhCf2nx9JvMaZCpDyD/V/Q4Q3Y1GLeqVw/HpYzY6b7cNGbIRwXdrz
AZAj/E4wqX7hJ2Pn7WQ8oLjJM2P+FpD/sLj916jAwJRDC7bVWaaeVtAkH3B5r9s5
VA1lddkVQZQBr17s9o3x/61k/iCa11zr/qYfCGSji3ZVrR47KGAuhyXoqq8fxmRG
ILdwfzzeSNuWU7c5d+Qa4scWhHaXWy+7GRWF+GmF9ZmnqfI0p6m2pgP8b4Y9VHx2
BJtr+UBdADTHLpl1neWIA6pN+APSQnbAGwIDAKiLo0UwQzASBgNVHRMBAf8ECDAG
AQH/AgEEMA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUzPpnk/C2uNClwB7zU/2M
U9+D15YwDQYJKoZIhvcNAQELBQADggEBAKt/7hwWqZw8UQCgwBEIBaeZ5m8BiFRh
bvG5GK1Krf6BQCOUL/t1fC8oS2IkgYIL9WHxHG64YTjrgfpioTtaYtOUZcTh5m2C
+C8lcLIhJsFyUR+MLMOEkMNaj7rP9KdlpeuY0fsFskZ1FSNqb4VjMIDw1Z4fKRzC
bLBQWV2QWzuoDTDPv31/zvGdg73JRm4gpvlhUbohL3u+pRVjodSVh/GeufOJ8z2F
uLjbvrW5KfnaNwUASZQDhETnv0Mxz3WLJdH0pmT1kvarBes96aULNmLazAZfNou2
XjG4Kvte9nHfRCaexOYNkbQudZWAUWpLMKawYqGT8ZvYzsRjdT9ZR7E=
-----END CERTIFICATE-----

$ curl http://aia1.netlock.hu/index.cgi?ca=gold
-----BEGIN CERTIFICATE-----
MIIEFTCCAv2gAwIBAgIGSUEs5AAQMA0GCSqGSIb3DQEBCwUAMIGnMQswCQYDVQQG
EwJIVTERMA8GA1UEBwwIQnVkYXBlc3QxFTATBgNVBAoMDE5ldExvY2sgS2Z0LjE3
MDUGA1UECwwuVGFuw7pzw610dsOhbnlraWFkw7NrIChDZXJ0aWZpY2F0aW9uIFNl
cnZpY2VzKTE1MDMGA1UEAwwsTmV0TG9jayBBcmFueSAoQ2xhc3MgR29sZCkgRsWR
dGFuw7pzw610dsOhbnkwHhcNMDgxMjExMTUwODIxWhcNMjgxMjA2MTUwODIxWjCB
pzELMAkGA1UEBhMCSFUxETAPBgNVBAcMCEJ1ZGFwZXN0MRUwEwYDVQQKDAxOZXRM
b2NrIEtmdC4xNzA1BgNVBAsMLlRhbsO6c8OtdHbDoW55a2lhZMOzayAoQ2VydGlm
aWNhdGlvbiBTZXJ2aWNlcykxNTAzBgNVBAMMLE5ldExvY2sgQXJhbnkgKENsYXNz
IEdvbGQpIEbFkXRhbsO6c8OtdHbDoW55MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAxCRec75LbRTDofTjl5Bu0jBFHjzuZ9lk4BqKf8owyoPjIMHj9DrT
lF8afFttvzBPhCf2nx9JvMaZCpDyD/V/Q4Q3Y1GLeqVw/HpYzY6b7cNGbIRwXdrz
AZAj/E4wqX7hJ2Pn7WQ8oLjJM2P+FpD/sLj916jAwJRDC7bVWaaeVtAkH3B5r9s5
VA1lddkVQZQBr17s9o3x/61k/iCa11zr/qYfCGSji3ZVrR47KGAuhyXoqq8fxmRG
ILdwfzzeSNuWU7c5d+Qa4scWhHaXWy+7GRWF+GmF9ZmnqfI0p6m2pgP8b4Y9VHx2
BJtr+UBdADTHLpl1neWIA6pN+APSQnbAGwIDAKiLo0UwQzASBgNVHRMBAf8ECDAG
AQH/AgEEMA4GA1UdDwEB/wQEAwIBBjAdBgNVHQ4EFgQUzPpnk/C2uNClwB7zU/2M
U9+D15YwDQYJKoZIhvcNAQELBQADggEBAKt/7hwWqZw8UQCgwBEIBaeZ5m8BiFRh
bvG5GK1Krf6BQCOUL/t1fC8oS2IkgYIL9WHxHG64YTjrgfpioTtaYtOUZcTh5m2C
+C8lcLIhJsFyUR+MLMOEkMNaj7rP9KdlpeuY0fsFskZ1FSNqb4VjMIDw1Z4fKRzC
bLBQWV2QWzuoDTDPv31/zvGdg73JRm4gpvlhUbohL3u+pRVjodSVh/GeufOJ8z2F
uLjbvrW5KfnaNwUASZQDhETnv0Mxz3WLJdH0pmT1kvarBes96aULNmLazAZfNou2
XjG4Kvte9nHfRCaexOYNkbQudZWAUWpLMKawYqGT8ZvYzsRjdT9ZR7E=
-----END CERTIFICATE-----

Can you please help us understand these observations against the expectations of RFC 5280, and the subject of this bug?

Flags: needinfo?(kaluha.roland)

(In reply to Roland from comment #2)

Full Incident Report

As has been stated in previous incidents:

Please review the CCADB Incident Response Guidelines, particularly the sections on required templates and timelines.

In addition to not following the template:

Are there examples of “bad” practices?
Claims that are subjective, unqualified opinions, speculative, or impossible to substantiate:

  • Avoid making claims that are speculative or that cannot be corroborated (e.g. “there is no security impact due to this issue.”)

I appreciate this is boilerplate text for NETLOCK incidents, but it has been bad practice for years.

Hello.

I noticed that the “Related Incidents” section states that there are no related bugs. This issue appears similar, if not identical, to several previously filed incidents involving other CAs:

Bug | Date | Description
1884461 | 2024-03-08 | CA certificates not published in DER-encoded format
1914466 | 2024-08-22 | CA certificates not published in DER-encoded format
1914893 | 2024-08-26 | CRL not DER-encoded
1938167 | 2024-12-18 | CRL not published in DER-encoded format

Feedback Requested 1:
Please explain why you believe this incident is not related to the above bugs. If you do consider them related, it would be helpful to understand why they were not identified during preparation of the Full Incident Report.

Given the similarity to other incidents disclosed last year, I also noticed that this report was Third‑Party Reported. This raises a question about whether your internal processes include monitoring of incident reports from the broader community. Public disclosures are intended to help all CA operators learn from each other’s findings.

Feedback Requested 2:
Please confirm whether your internal processes include reviewing other CAs’ incident reports. If so, a brief overview of how you track these reports and incorporate relevant lessons would be helpful. It would also be useful to understand why this issue was identified through a third‑party report rather than through your own monitoring of other incidents to learn the lessons and incorporate within your environment.

Feedback Requested 3:
Please update the Full Incident Report if any sections, including Related Incidents or Action Items, require revision based on the above.

Thank you.

Dear community members,

thank you for your feedback; we are updating our previous report.

Full Incident Report

Summary

Incident description:
NETLOCK identified a non-compliance where several Authority Information Access (AIA) HTTP endpoints returned issuer certificates in PEM encoding instead of DER encoding, contrary to the requirements of RFC 5280 Section 4.2.2.1.

TLS certificates are issued from the following issuing CA hierarchies:

TLS certificates are not issued from the following issuing CA hierarchy:

Timeline summary:

  • Non-compliance start dates (per AIA URI):
    • DVCA: 2021-08-25
    • pdvca: 2019-02-20
    • trustev3: 2020-02-10
    • qtrustev3: 2020-02-10
  • Non-compliance identified date: 2025-12-06
  • Non-compliance end date: In progress

Relevant policies:

  • RFC 5280 Section 4.2.2.1
  • CCADB Incident Reporting Requirements
  • CA/Browser Forum Baseline Requirements (general compliance expectations)

Source of incident disclosure:
External community report via Mozilla Bugzilla


Impact

  • Total number of certificates: All end-entity certificates chaining to the affected issuers
  • Total number of "remaining valid" certificates: All
  • Affected certificate types:
    Issuer (CA) certificates referenced via AIA
  • Incident heuristic:
    Standards compliance failure (AIA encoding)
  • Was issuance stopped in response to this incident, and why or why not?:
    No. Although TLS certificates are issued from some affected hierarchies, the issue relates to the encoding of issuer certificates served via AIA and does not involve the certificate issuance process or cryptographic material.
  • Analysis:
    No mis-issuance, invalid certificates, or key compromise has been identified by NETLOCK as part of this investigation.
    The non-compliance relates to RFC 5280 requirements for AIA encoding and may affect relying parties that retrieve issuer certificates via AIA and strictly enforce DER encoding.
  • Additional considerations:
    The affected AIA endpoints serve issuer certificates only. No impact on the contents or issuance process of end-entity TLS certificates was identified during the investigation.

Timeline

  • 2019-02-20: Non-compliant AIA configuration introduced for pdvca.
  • 2020-02-10: Non-compliant AIA configuration introduced for trustev3 and qtrustev3.
  • 2021-08-25: Non-compliant AIA configuration introduced for DVCA.
  • 2025-12-06: External community report received identifying the AIA encoding non-compliance.
  • 2025-12-08: Bugzilla bug filed by NETLOCK.
  • 2025-12-08: Internal investigation initiated.
  • 2025-12-08: Technical staff opened an internal defect tracking ticket.
  • 2025-12-09: Affected RSA issuing CA hierarchies identified and mapped.
  • 2025-12-15: Affected ECC issuing CA hierarchies identified and mapped.
  • 2025-12-19: Developers completed the initial software release implementing the remediation.
  • 2025-12-30: The release was demonstrated and validated in the appropriate demo environment.
  • 2026-01-08: Production deployment was completed.
  • 2026-01-12: NETLOCK staff observed, and an external observer also reported, that PEM-encoded content was still being served from a limited number of AIA endpoints.
  • 2026-01-30: Planned completion of development work for an additional corrective fix.
  • 2026-02-06: Planned testing and validation of the corrective fix.
  • 2026-02-12: Planned deployment of the corrective fix to the production environment.

Related Incidents

NETLOCK has reviewed its incident records and confirms that no related incidents have occurred within its CA operations.


Root Cause Analysis

Contributing Factor #1: Legacy AIA service configuration

  • Description:
    AIA HTTP services were configured to return issuer certificates in PEM format instead of DER format.
  • Timeline:
    Introduced between 2019 and 2021 depending on the issuing hierarchy.
  • Detection:
    Detected through external community reporting.
  • Interaction with other factors:
    Insufficient automated compliance checks allowed the issue to persist.
  • Root Cause Analysis methodology used:
    Retrospective technical configuration review.

Contributing Factor #2: Insufficient automated compliance controls

  • Description:
    No automated validation existed to verify RFC 5280 compliance of AIA endpoint responses.
  • Timeline:
    Ongoing until external disclosure.
  • Detection:
    Identified during internal investigation.
  • Interaction with other factors:
    Reliance on manual review processes.
  • Root Cause Analysis methodology used:
    Compliance control gap analysis.

Contributing Factor #3: Incomplete remediation coverage

  • Description:
    The initial remediation did not fully cover all AIA endpoints serving issuer certificates.
  • Timeline:
    Identified after the initial production deployment.
  • Detection:
    Detected through internal verification and external observation.
  • Interaction with other factors:
    Endpoint-level differences and insufficient post-deployment validation contributed to incomplete remediation.
  • Root Cause Analysis methodology used:
    Post-deployment verification and gap analysis.
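Contributing Factor #3 points at coverage: post-deployment verification must enumerate every AIA URI rather than a sample. A minimal sketch of such a check; the URI list and the `fetch` callable are hypothetical placeholders, not NETLOCK's actual inventory or tooling, and responses are simulated here instead of fetched over HTTP:

```python
from typing import Callable, Dict, List

# Hypothetical endpoint inventory; a real list would come from the CA's
# configuration management rather than being hard-coded.
AIA_URIS: List[str] = [
    "http://aia1.example.test/index.cgi?ca=gold",
    "http://aia2.example.test/index.cgi?ca=gold",
]

def verify_all_der(uris: List[str], fetch: Callable[[str], bytes]) -> Dict[str, bool]:
    """Map each URI to True if its response looks DER-encoded (leading 0x30 byte)."""
    return {uri: fetch(uri)[:1] == b"\x30" for uri in uris}

# Simulated responses standing in for HTTP fetches:
responses = {
    AIA_URIS[0]: b"\x30\x82\x01\x00payload",               # DER: compliant
    AIA_URIS[1]: b"-----BEGIN CERTIFICATE-----\nMIIB...",  # PEM: non-compliant
}
results = verify_all_der(AIA_URIS, responses.__getitem__)
failures = [uri for uri, ok in results.items() if not ok]
print(failures)  # only the PEM-serving endpoint is flagged
```

Running such a sweep as a release gate over the complete inventory would have caught the endpoints that the initial remediation missed.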

Lessons Learned

  • What went well:
    The issue was promptly investigated once reported and transparently communicated.
  • What didn’t go well:
    The non-compliance was not detected internally for several years, and the initial remediation did not fully resolve the issue.
  • Where we got lucky:
    Based on the investigation, no issues were identified with end-entity certificates or cryptographic material.
  • Additional:
    Comprehensive post-deployment validation and automated technical controls are required in addition to human review.

Action Items

Action Item | Kind | Corresponding Root Cause(s) | Evaluation Criteria | Due Date | Status
Correct AIA endpoints to serve DER (initial) | Corrective | Root Cause #1 | DER verified by automated test | 2026-01-08 | Completed
Deploy additional corrective fix | Corrective | Root Cause #3 | All endpoints verified as DER | 2026-02-12 | Planned
Implement automated AIA validation | Prevent | Root Cause #2 | Monitoring alerts enabled | 2026-02-15 | Planned
Update internal compliance checklists | Prevent | Root Cause #2 | Checklist approved and in operational use | 2026-01-31 | In Progress
Clarify auditor scope for AIA endpoint checks | Prevent | Root Cause #2 | Audit evidence updated | 2026-06-30 | Planned

Appendix

Affected AIA URIs and TLS usage:

Flags: needinfo?(kaluha.roland)

Dear Community Members,

regarding Related Incidents, NETLOCK limits this section to incidents that occurred within NETLOCK’s own operations. This is because we can only make factual and accountable statements based on incidents that happened in our environment and for which we have full and verified information. We do not have visibility into the internal systems, processes, or root causes of other CAs, and therefore cannot reliably assess whether incidents at other organizations are directly related beyond a superficial similarity.
With respect to monitoring community disclosures, NETLOCK’s PKI subject matter experts actively follow incident reports and discussions within the broader CA/Browser community. However, we do not maintain a comprehensive, long-term internal database that fully processes, classifies, and retains all other CAs’ Bugzilla incidents over multiple years. Team composition also evolves over time, and during onboarding it is not feasible for staff to gain detailed familiarity with every historical incident reported by other CAs.
That said, we appreciate the good-faith and constructive nature of the feedback. In response, NETLOCK is initiating a program to establish an internal knowledge base that will be usable by both development and operations teams. This knowledge base will aim to collect and make searchable lessons learned, recurring issues, and remediation approaches derived from incidents affecting CAs similar to NETLOCK. While such a knowledge base can support awareness and learning, it does not imply that external incidents can be applied one-to-one to NETLOCK’s specific architecture, processes, or controls.

The CCADB Incident Reporting Guidelines require:

"Related Incidents MUST consider incidents beyond those corresponding to the CA Owner subject of this report."

NETLOCK's response in Comment 9:

"NETLOCK limits this section to incidents that occurred within NETLOCK's own operations."

The justification that NETLOCK lacks "visibility into the internal systems, processes, or root causes of other CAs" is not compatible with the IRGs. The IRGs do not require analysis of other CAs’ internal systems; they require identifying publicly documented incidents relevant to understanding the current issue. NETLOCK's interpretation indicates a gap in its processes for monitoring and incorporating community incident reports, and therefore constitutes a separate process‑compliance issue that must be tracked independently.

NETLOCK: Please update the Related Incidents section to include any missing related incidents. In addition, please file a new incident documenting this IRG non‑compliance, including a root‑cause analysis and Action Items describing the process changes needed to ensure future compliance.

Dear Community Members,

The action items outlined above are being executed in accordance with the implementation plan and schedule.

We will provide formal updates to the Mozilla community regarding completed actions and relevant milestones, ensuring transparency and accountability throughout the process.


We would like to respond by clarifying our position and by emphasizing the importance of a constructive, good-faith community dynamic.
NETLOCK values the Mozilla community and has consistently engaged with it in the spirit of transparency and continuous improvement. For this reason, we believe it is essential that community interactions remain supportive and collaborative. The community functions best when it operates as a partner in improvement rather than as an enforcer by tone alone.
We continue to maintain that NETLOCK does not, and cannot, have detailed visibility into the internal operations of other Certification Authorities. Such internal processes are legitimately protected as business secrets. Furthermore, we cannot be expected to fully understand or accurately assess the diverse operational challenges CAs face across different geographies, including but not limited to local regulatory frameworks, supervisory authorities, labor market constraints, and jurisdiction-specific operational risks. These factors materially affect day-to-day operations, yet they are inherently opaque to external parties.
We are willing to collect and reference publicly documented, relevant Bugzilla incidents and to update our report accordingly. However, we believe it is important to recognize that drawing conclusions without sufficient contextual knowledge risks producing an incomplete, distorted, or misleading picture. This limitation should be acknowledged when evaluating related incidents originating from fundamentally different operational environments.
Our firm view is that the community’s greatest value lies in a genuinely helpful, practical, and operations-supportive approach. This position is not new; NETLOCK has articulated this expectation consistently in prior interactions. We remain committed to engaging constructively and in good faith, and we believe that mutual respect and balanced discourse are essential to maintaining an effective and healthy ecosystem.

The response from NETLOCK in Comment #11 is extremely concerning.

The response seems to conflate "operational secrets" of other CAs with "public post-mortems". No one is asking NETLOCK to analyze the private internal operations of other CAs, but to analyze the publicly disclosed remediation steps and learnings so that they can ensure their own compliance is in order.
Further, compliance with the BRs and CCADB policies is strictly binary. Factors like labor market constraints and local regulations do not grant NETLOCK an exception from RFC compliance or the IRGs. If local conditions prevent NETLOCK from meeting the strict technical and procedural standards required of publicly-trusted CAs, that is simply not relevant here.

NETLOCK, please confirm that:
1.) You will file the separate incident report regarding IRG non-compliance as requested in Comment #10.
2.) You acknowledge that "labor market constraints" are an unacceptable justification for technical non-compliance in the WebPKI.

I’d like to clarify Mozilla’s expectations and address some of the important points raised here.

First, there is a difference between attempting to compare the internal operations of other CA operators and reviewing publicly documented incident reports and post-mortems. Mozilla does not expect Netlock to assess the internal processes of other CA operators, but we do expect it to review publicly available incident documentation in Bugzilla. This practice enables all CA operators to identify failure modes, remediations, and lessons learned, and to evaluate them against their own systems and controls.

Second, compliance with the CA/Browser Forum Baseline Requirements, CCADB policies, and Mozilla Root Store Policy is not contextual. These requirements apply uniformly, regardless of jurisdiction. While Mozilla recognizes that CA operators may function under different regulatory and operational conditions, such factors do not excuse or justify technical or procedural non-compliance. If local constraints affect a CA operator’s ability to meet these requirements, Mozilla expects that risk to be clearly identified, mitigated, or escalated.

Finally, to ensure clarity and proper tracking, Mozilla reiterates the requests in Comment #10 and Comment #12 that Netlock file a separate incident report addressing the identified incident-reporting non-compliance, consistent with CCADB incident reporting expectations (i.e. the report should describe the nature of the non-compliance, its root causes, the remediation measures taken, preventive measures implemented, etc.).

Mozilla values constructive, good-faith engagement and expects discussions to remain professional and respectful. At the same time, clear ownership of compliance obligations and demonstrable learning from ecosystem incidents are essential to maintaining trust.

Dear Community Members,

we would like to thank the community for its supportive approach and for maintaining a respectful and constructive dialogue.
NETLOCK acknowledges the obligations referenced in the previous comments and fully recognizes the importance of learning from past incidents, including leveraging the experiences and lessons learned by other CA operators when identifying issues and implementing corrective actions.
In our earlier comment, we intended solely to explain that, in certain cases, the factors leading to an incident may also stem from differing operational environments. We did not state, nor did we intend to imply, that such differences in operating conditions provide any exemption from the obligation to remediate issues, eliminate root causes, or thoroughly investigate the circumstances that led to an error. Our point was that, in some cases, these conditions may have contributed to additional or distinct failure modes for the affected CAs, and that these factors should not be disregarded if the community aims to achieve a comprehensive understanding of incidents. In our view, the community’s objective is genuine root-cause identification and effective remediation, which requires each CA to examine all relevant causes and conditions, including those that may be unique to a specific operator. We emphasized this only because we believe it is important to explicitly acknowledge that while the applicable community rules and compliance obligations are uniform—and we do not dispute and have never disputed them—operational circumstances may still differ.
We also believe it would be incorrect to isolate a single example, such as labor market constraints or local regulations, and interpret it as justification for deviation from uniform requirements. At the same time, such circumstances must be identified and evaluated as part of a proper root-cause analysis, and appropriate measures must be taken by the CA to ensure effective and lasting remediation. We kindly ask the community to consider our position in this context and with this clarification in mind.
Regarding the issues referenced in Comment #10, as well as Comments #12 and #13, we have opened the corresponding ticket, are performing a detailed analysis, and will implement the necessary measures to resolve the identified non-compliance.

Reading NetLock's responses, I have a deep suspicion regarding their origin.

In Bug 1904041, NetLock promised to improve with a "six-eyes" review, yet those eyes failed to see the PEM formatting problem, or many of the other mix-ups since.

For over two years, we have heard the same commitments to improvement, yet the quality of both the operations and these reports is getting worse.

These reports contain the same empty promises, over and over. How can you convince the community that these blocks of text are not generated by an LLM or a chatbot? The language patterns and the lack of action are strikingly similar. We want to see technical controls and genuine improvement, not human promises or artificial apologies.

crt.sh shows that NetLock is trusted in many root certificate stores. I am unsure whether all of those root store operators participate in Bugzilla, but I am directing this question to them.

Can you explain how NetLock's behavior is acceptable for a publicly trusted root? I must wonder: is the same bar held for all certificate authorities? If a new CA exhibited performance this poor, would it be accepted? It appears there is a double standard in which a legacy CA is permitted to ignore rules that a new CA must follow, and that cannot be acceptable.

Dear Community Members,

we have completed the internal compliance checklist and are continuing with the remaining follow-up activities.
As part of this work, we have updated the Action Items as outlined below.

Updated Action Items

| Action Item | Kind | Corresponding Root Cause(s) | Evaluation Criteria | Due Date | Status |
| --- | --- | --- | --- | --- | --- |
| Correct AIA endpoints to serve DER (initial) | Corrective | Root Cause #1 | DER verified by automated test | 2026-01-08 | Completed |
| Update internal compliance checklists | Prevent | Root Cause #2 | Checklist approved and in operational use | 2026-01-31 | Completed |
| Deploy additional corrective fix | Corrective | Root Cause #3 | All endpoints verified as DER | 2026-02-12 | Planned |
| Implement automated AIA validation | Prevent | Root Cause #2 | Monitoring alerts enabled | 2026-02-15 | Planned |
| Clarify auditor scope for AIA endpoint checks | Prevent | Root Cause #2 | Audit evidence updated | 2026-06-30 | Planned |
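As a minimal sketch of what the planned automated AIA validation could look like (the function names and the detection heuristic here are illustrative assumptions, not NETLOCK's actual tooling): RFC 5280 §4.2.2.1 requires the CA Issuers URI to serve DER, so a monitoring check can fetch the resource and reject anything that begins with a PEM banner rather than an ASN.1 SEQUENCE tag (0x30).

```python
import urllib.request


def is_der_certificate(data: bytes) -> bool:
    """Heuristic check that fetched bytes look like a DER-encoded certificate.

    PEM files begin with an ASCII "-----BEGIN" banner, while a DER
    certificate begins with the ASN.1 SEQUENCE tag byte 0x30.
    """
    if data.startswith(b"-----BEGIN"):
        return False
    return len(data) > 2 and data[0] == 0x30


def check_aia_endpoint(url: str) -> bool:
    """Fetch a CA Issuers URI and report whether it serves DER content."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return is_der_certificate(resp.read())
```

A monitoring job could run `check_aia_endpoint` against every published CA Issuers URI and raise an alert on any `False` result; a production check would additionally parse the bytes with a real ASN.1/X.509 library to confirm a well-formed certificate rather than relying on the first byte alone.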

In addition, based on feedback from the community, we have reviewed the related incidents and updated the incident list accordingly.

Related Incidents

| Bug | Date | Description |
| --- | --- | --- |
| 1884461 | 2024-03-08 | CA certificates not published in DER-encoded format |
| 1914466 | 2024-08-22 | CA certificates not published in DER-encoded format |
| 1914893 | 2024-08-26 | Amazon Trust Services: CRL not DER-encoded |
| 1938167 | 2024-12-18 | CRL not published in DER-encoded format |
| 2004492 | 2025-12-05 | IdenTrust: CA Certificate not published in DER Encoded Format |
| 2004732 | 2025-12-08 | Certigna: AIA CA issuer field pointing to PEM encoded cert |
| 2004733 | 2025-12-08 | NAVER Cloud Trust Services: CA Certificate not published in DER Encoded Format |

We would like to address the concerns raised regarding our recent responses and operational practices.

The previously discussed and questioned six-eyes principle has been formally introduced by NETLOCK and has been continuously applied since its implementation. It is important to clarify, however, that the six-eyes principle can only be effective for measures that are explicitly included within the scope of six-eyes-controlled actions. While we are steadily expanding this scope, it is not yet possible to fully eliminate human involvement from all operational processes. Monitoring and validation configurations are still created and maintained by humans, and therefore remain subject to human error despite layered controls.

We would also like to firmly reject the suggestion that our responses are generated by artificial intelligence, large language models, or chat bots. Our team meets on a weekly basis to review incidents, action items, and community feedback, and our responses are formulated collectively during these discussions. NETLOCK’s compliance and operations teams consist of human professionals who are directly accountable for both the content and the actions described in our reports.

We acknowledge that we have received accusations which, while carefully worded, do not align with the spirit of a constructive and supportive community dialogue. We have repeatedly emphasized that the purpose of the community should be to support compliant operation and continuous improvement by identifying issues and encouraging effective technical controls.

In this context, we believe it is important to recognize and value the efforts made by all parties involved. Eliminating or diminishing the work of others does not contribute to progress—neither in the resolution of this specific issue, nor in improving processes across the ecosystem more broadly. Constructive engagement and mutual respect are essential for achieving meaningful and lasting improvements.

We remain committed to implementing concrete technical measures, strengthening preventive controls, and demonstrating measurable improvement over time. We believe that meaningful progress is best achieved through fact-based discussion and collaborative efforts focused on operational outcomes.

NETLOCK remains engaged with the community and continues to take its responsibilities as a publicly trusted CA seriously.

Dear Community Members,

we are continuing to work on the remaining tasks that are still in progress under the Updated Action Items section.
Once these tasks have been completed, we will provide a follow-up report detailing the work performed and the completed action items.
