Closed Bug 1883779 Opened 4 months ago Closed 1 month ago

VikingCloud: OV Precertificates with incorrect Subject RDN encoding order

Categories

(CA Program :: CA Certificate Compliance, task)

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: andreaholland, Assigned: andreaholland)

Details

(Whiteboard: [ca-compliance] [ov-misissuance])

Attachments

(2 files)

Steps to reproduce:

Incident Report

This is a preliminary report.

Summary

On March 4, 2024, VikingCloud was notified via our Certificate Problem Report mechanism of a potential error in the order of the Subject RDN attributes, identified through a random sampling of OV end-entity precertificates in Certificate Transparency logs. Our initial investigation confirmed that the reported OV precertificates were issued with the Subject RDN attributes in reverse order. We are still investigating the impact of this issue, and a full incident report will be provided.

Assignee: nobody → andreaholland
Status: UNCONFIRMED → ASSIGNED
Type: defect → task
Ever confirmed: true
Whiteboard: [ca-compliance] [ov-misissuance]

Update

The reported certificates were revoked within 5 days of being reported and are listed below.

We are still in the process of investigating the incident and looking for any additional impacted certificates. We will provide a full report once that investigation is complete.

Incident Report

Summary

VikingCloud received a Certificate Problem Report on March 4, 2024 at 13:04 UTC concerning two potentially misissued OV end-entity precertificates with an incorrect Subject RDN attribute encoding order. An initial investigation confirmed that the reported OV precertificates had been issued with the attributes encoded in reverse order.

Further investigation revealed that the OV certificate profile itself was correct; however, the order defined by the profile was being overwritten at the last stage before issuance by a software enum whose entries were in reverse order, causing the misissuance of OV certificates.

This incident was not related to any cybersecurity breach or any other security event at VikingCloud or at any of our customers. The security features of the certificate, including its encryption, were not impacted. End-users would not have experienced any issues related to the certificate.

Impact

There were 3,167 OV certificates containing an incorrect Subject RDN attribute encoding order. The reversed order became non-compliant on September 15, 2023, the effective date of the Certificate Profiles update (ballot SC62) to the TLS BRs.

All affected certificates issued during that period will be reissued and revoked in accordance with the TLS BR requirements. An expected delay in revocation will be addressed in bug https://bugzilla.mozilla.org/show_bug.cgi?id=1885568.
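
For reference, the encoded Subject RDN order of a certificate can be inspected directly. The minimal sketch below is illustrative only, not our production tooling; the input file name is a placeholder. It prints the subject attributes in the order they are actually encoded in the DER, which makes a reversed order immediately visible.

# Minimal diagnostic sketch: print the Subject RDN attribute types of a
# DER-encoded certificate in their actual encoded order.
# Assumes the `cryptography` package is installed; the file path is a placeholder.
from cryptography import x509

# Friendly names for a few common subject attribute OIDs.
FRIENDLY = {
    "2.5.4.6": "countryName",
    "2.5.4.8": "stateOrProvinceName",
    "2.5.4.7": "localityName",
    "2.5.4.10": "organizationName",
    "2.5.4.3": "commonName",
}

with open("example-precert.der", "rb") as f:  # placeholder input file
    cert = x509.load_der_x509_certificate(f.read())

# Name.rdns yields the RDNs in the order they appear in the encoded
# certificate (rfc4514_string() reverses them for display).
for i, rdn in enumerate(cert.subject.rdns):
    for attr in rdn:
        oid = attr.oid.dotted_string
        print(f"{i}: {FRIENDLY.get(oid, oid)} = {attr.value}")

The printed order can then be compared against the attribute ordering required by the TLS BR certificate profiles.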

Timeline (All times are UTC)

2023-09-15 00:00 – Effective date for TLS BR v2.0.0.
2024-03-04 13:04 – CPR notification of potential misissuance due to incorrect Subject RDN attribute encoding order.
2024-03-04 21:18 – Began investigation on the CPR.
2024-03-05 21:42 – Preliminary report on misissuance created.
2024-03-07 19:55 – Potential root cause discovered; work on a solution started; OV issuance held.
2024-03-09 12:56 – CPR certificates revoked.
2024-03-09 14:22 – Solution implemented in production and OV issuance started.
2024-03-11 14:53 – Confirmed that other OV certificates were affected.
2024-03-15 16:20 – Preliminary report for delayed revocation created.
2024-03-16 14:53 – Deadline under the 5-day revocation requirement.

Root Cause Analysis

Override of Designed Profile Configuration

  • Reviews Centered Around Profiles – As the certificate profiles ballot (SC62) was reviewed during its development, internal focus was on the updated profiles used in the software. Because each iteration of the SC62 ballot review had different elements to focus on, testing of the final certificates was affected by the complexity of those iterations and was performed with tools rather than with the low-level manual validation that would have revealed our issue.
  • Code Override of Profiles – As the certificate profile was used and a precertificate was created, various pieces of data were written to the database, one of which was the set of Subject RDN attributes in the correct encoding order. Because this provided a quick check of the Subject RDN encoding order, it was used in a number of test cases. At one of the last stages on one of the issuance paths, the correct Subject RDN encoding order was overwritten, via an enum ordering, with the reversed order. This reversed order differed from the order documented in the database. The reversal happened just before the validated information to be included in the certificate was sent to the CA to create the precertificate, and it was therefore carried into the final certificate (a simplified illustration follows this list).
  • Reliance On Tools – With resources devoted to creating new dedicated root certificates and beginning root inclusion requests, increasing automation with ACME services, streamlining backend operations for automation, and tracking the growing list of changes across certificate types (such as the S/MIME Baseline Requirements), we became more reliant on tools for testing rather than on the core-level manual validation that would have revealed our issue. For example, our tools showed the correct encoding order in the Subject RDN attributes when, in reality, the order was not being presented correctly.
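
The following simplified illustration is not our actual code; all names are hypothetical. It shows how an enum left over from an older profile can silently override the profile-defined attribute order at the final assembly step.

# Hypothetical illustration of an enum overriding a profile-defined order.
from enum import IntEnum


class SubjectAttr(IntEnum):
    # Ordering left over from the pre-SC62 profile: lowest value sorts first.
    COMMON_NAME = 1
    ORGANIZATION = 2
    LOCALITY = 3
    STATE = 4
    COUNTRY = 5


def profile_order():
    # The updated profile defines the intended encoding order.
    return ["COUNTRY", "STATE", "LOCALITY", "ORGANIZATION", "COMMON_NAME"]


def assemble_subject(attrs: dict) -> list:
    # Final issuance step: re-sorting by the enum discards the profile order
    # and reintroduces the old (now reversed) ordering.
    return sorted(attrs, key=lambda name: SubjectAttr[name])


attrs = {name: "..." for name in profile_order()}
print("profile order :", profile_order())
print("issued order  :", assemble_subject(attrs))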

Shared Environment of Linting and Evaluation Tools

  • Centralization Breeds a Potential Single Point of Failure – Since all teams involved with digital certificates needed tools and linters for evaluating and testing certificates, a central linting/testing environment was created to be shared across the groups.
  • Observations of Other CA Bugzilla Reports Prompted Linting Tests – As CAs were reporting non-compliance around incorrect Subject RDN attribute encoding order, we turned to our shared linters and tools to test and verify our configuration. Since pkilint was determined to flag an error for this situation, our team ran groups of certificates through that shared resource to validate that the issue was not present in our end-entity certificates. We relied on the linting tool rather than on additional core-level manual validation that would have revealed our issue.
  • Failure to Point to the Latest Version – As this central linting/testing environment, containing a spectrum of linters (old and new), received updates, sufficient controls were not in place to recognize and alert that the latest version of pkilint was not in use after a software update. The failure to update the software properly was found while investigating our Certificate Problem Report (a sketch of such a version check follows this list).
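
For illustration, a control of this kind might be a simple version-drift check like the sketch below. The package name and pinned version are illustrative, not our actual configuration.

# Sketch of a version-drift check for pip-installed linters in a shared
# environment: compare installed versions against the versions qualified
# for use and alert on any drift.
from importlib import metadata

EXPECTED = {
    "pkilint": "0.9.9",  # illustrative pin; use whatever version you have qualified
}


def check_versions(expected: dict) -> list:
    problems = []
    for package, pinned in expected.items():
        try:
            installed = metadata.version(package)
        except metadata.PackageNotFoundError:
            problems.append(f"{package}: not installed")
            continue
        if installed != pinned:
            problems.append(f"{package}: installed {installed}, expected {pinned}")
    return problems


if __name__ == "__main__":
    issues = check_versions(EXPECTED)
    for line in issues:
        print("VERSION DRIFT:", line)
    raise SystemExit(1 if issues else 0)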

Lessons Learned

While we have increased our reliance on linting tools as they have become more detailed and reliable for validation testing, we have determined that in some cases internal tools can display results incorrectly. We will now require additional core-level manual validation at all validation stages, including PKI Engineering, PKI Quality Assurance, and Internal Audit.

What went well

The original Certificate Problem Report was handled in a timely manner.

What didn’t go well

First, the time to investigate the root cause of this reversal was longer than expected. After the root cause investigation, we next had to identify the impacted OV certificates, begin customer communication, reissuance, and revocation.

Reliance on higher-level tools to visually check for compliance, rather than a core-level evaluation of the certificate, introduced additional challenges. Going forward, we will continue to use all of the tools but will add steps to evaluate certificates at the core level.

The significant changes in the profiles ballot, and the more than two years spent drafting it, required multiple reviews, changes, discussions, and additions within the CA/B Forum, as well as our own internal reviews. Tracking the sub-changes and combined changes alongside internal reviews became very time-consuming and error-prone, and at that point no tools were available to catch missed items. It would have been helpful to CAs to have a set of linting tests for the profile changes available about 6 months before their effective date so that misissuances like this could have been avoided.

Where we got lucky

Receiving the Certificate Problem Report only a few months after the profile change took effect saved us from a much larger revocation event (it could have come much later). It also alerted us to other issues within our system, namely tools not updating properly.

Action Items

| Action Item | Kind | Due Date |
| Update test plans to require manual core level validation on a sample of certificates on every release cycle in addition to the testing using tools | Prevent | 2024-04-01 |
| Internal Audit to add manual core level validation on a sample of certificates | Detect | 2024-05-01 |
| PKI Engineering to run separate instances of Linters | Prevent | 2024-03-06 |
| Additional controls around the shared Linting instance to verify updates | Detect | 2024-05-01 |

Appendix

Details of affected certificates

The list will be attached later in the day.

Hi Andrea,

Thanks for filing this bug.

Updates Requested:

A. The Timeline Section does not describe when the Certificate Problem Report was responded to or when VikingCloud provided a preliminary report on its findings to both the affected Subscribers and the entity that filed the report (BRs Section 4.9.5, copied below).

4.9.5: "Within 24 hours after receiving a Certificate Problem Report, the CA SHALL investigate the facts and circumstances related to a Certificate Problem Report and provide a preliminary report on its findings to both the Subscriber and the entity who filed the Certificate Problem Report. After reviewing the facts and circumstances, the CA SHALL work with the Subscriber and any entity reporting the Certificate Problem Report or other revocation-related notice to establish whether or not the certificate will be revoked, and if so, a date which the CA will revoke the certificate. The period from receipt of the Certificate Problem Report or revocation-related notice to published revocation MUST NOT exceed the time frame set forth in Section 4.9.1.1."

B. The Root Cause Analysis does not clearly present the root cause(s) that resulted in this incident. It generally describes things that went wrong, but it’s not yet clear to me why they went wrong. Can you help the community understand what specifically led to the issues that culminated in this incident? You could try the “5 Whys” methodology observed in 1878106.

C. The Action Items list does not directly address elements of the What didn’t go well list (e.g., there is no action corresponding with improving detection of changes in the BRs). Additionally, providing more detail related to each Action Item, to consider methods for the community to quantify whether each of the described remediation tactics was successful, would be helpful.

Questions:

Q1) Can you explain the delay between being notified of the mis-issued certificates in the problem report and identifying all other affected certificates issued by VikingCloud?

Q2) Can you describe how VikingCloud performs linting today (i.e., is it strictly post-issuance linting? If so, can you describe whether pre-issuance linting has been considered?).

Q3) Can you describe how VikingCloud evaluates linting tools to fully comprehend each one's scope, capabilities, and limitations, including as updates are made available, and including tools that might not yet be in use today (e.g., pkilint)?

Q4) Can you describe how VikingCloud validates that linting tools are working as expected?

Flags: needinfo?(andreaholland)

Additional Information

A. The Timeline Section does not describe when the Certificate Problem Report was responded to or when VikingCloud provided a preliminary report on its findings to both the affected Subscribers and the entity that filed the report (BRs Section 4.9.5, copied below).

Timeline (All times are UTC)

  • 2024-03-04 21:19 – Email sent to report filer acknowledging receipt and the investigation into CPR.
  • 2024-03-05 15:57 – Followed up with report filer.
  • 2024-03-07 18:14 – Follow up to report filer with this bug.
  • 2024-03-07 19:19 – Notified 2 affected subscribers. A Bugzilla bug will be filed for the delay.

B. The Root Cause Analysis does not clearly present the root cause(s) that resulted in this incident. It generally describes things that went wrong, but it’s not yet clear to me why they went wrong. Can you help the community understand what specifically led to the issues that culminated in this incident? You could try the “5 Whys” methodology observed in 1878106.

Why was there a problem?

Because the OV precertificates were presenting the incorrect Subject RDN attribute encoding order.

Why were the OV precertificates presenting the incorrect Subject RDN attribute encoding order?

Because the final stage before issuance was presenting the incorrect order.

Why was the final stage presenting the incorrect order?

Because the last certificate generation step included an enum in some issuance paths with that incorrect order.

Why did the enum have the incorrect order?

Because the profile update did not include an update to the enum order.

Why wasn’t the enum order updated?

Because it had matched the older profile and didn’t appear to control the order during the evaluation of the needed BR changes.

C. The Action Items list does not directly address elements of the What didn’t go well list (e.g., there is no action corresponding with improving detection of changes in the BRs). Additionally, providing more detail related to each Action Item, to consider methods for the community to quantify whether each of the described remediation tactics was successful, would be helpful.

Action Items

| Action Item | Kind | Due Date |

| Speak up more in the CAB meetings to request that large ballots be broken down smaller to reduce the risk of errors or missed items. | Prevent | 2024-03-26 |

Q1) Can you explain the delay between being notified of the mis-issued certificates in the problem report and identifying all other affected certificates issued by VikingCloud?

Our first step in the process was to investigate and address the issue for the OV precertificates noted in the CPR. After we discovered the root cause of the issue, we worked at providing a stable solution. After suspecting that other certificates may have been impacted, we held issuance until confirmation of that impact. Finally, once a solution was provided into production, we moved on to investigate the various issuance paths of the OV certificates to see if any of the paths were also affected. Once the impact was confirmed for all affected OV certificates on certain issuance paths, we triggered the 5-day timeline for revocation.

Q2) Can you describe how VikingCloud performs linting today (i.e., is it strictly post-issuance linting? If so, can you describe whether pre-issuance linting has been considered?).

Pre-issuance linting is performed on all certificates by the software, and post-issuance linting is performed by Internal Audit on samples of the certificates.

Q3) Can you describe how VikingCloud evaluates linting tools to fully comprehend each one's scope, capabilities, and limitations, including as updates are made available, and including tools that might not yet be in use today (e.g., pkilint)?

Linting tools are evaluated and continuously updated in our central linter/testing environment for use by all team members. Once a linter change is approved, it is placed on the roadmap and tested before being deployed to our production linter. pkilint is on the roadmap to be added to production pre-issuance linting.

Q4) Can you describe how VikingCloud validates that linting tools are working as expected?

We test updates to linters in our central linter/testing environment before they are added to the roadmap for deployment to our production pre-issuance linting service.

Flags: needinfo?(andreaholland)

Thank you for responding to the original questions.

“Our first step in the process was to investigate and address the issue for the OV precertificates noted in the CPR. After we discovered the root cause of the issue, we worked at providing a stable solution. After suspecting that other certificates may have been impacted, we held issuance until confirmation of that impact. Finally, once a solution was provided into production, we moved on to investigate the various issuance paths of the OV certificates to see if any of the paths were also affected. Once the impact was confirmed for all affected OV certificates on certain issuance paths, we triggered the 5-day timeline for revocation.”

The Certificate Problem Report was received by Viking Cloud on March 4, 2024. It described three mis-issued OV certificates that were derived by sampling Certificate Transparency Logs, and that the list should not be considered exhaustive.

Bug 1885568 separately describes that other OV certificates were confirmed affected by the issue in this report on March 11, 2024.

The difference between these dates is 7 days.

Question #1: Upon receiving the CPR and determining mis-issuance took place “(2024-03-05 21:42 – Preliminary report on misissuance created.)” as disclosed in the Timeline section, why wasn’t it readily apparent to Viking Cloud that all OV certificates issued after September 15, 2023 were equally affected, especially when considering the volume of issuance during this period (which I understand was about 3,000 certificates)? The Timeline suggests that the full OV certificate impact was not determined until 2024-03-11 14:53.

Question #2: It’s stated that "Finally, once a solution was provided into production, we moved on to investigate the various issuance paths of the OV certificates to see if any of the paths were also affected.” Does this imply that it was known to Viking Cloud that all issuance profiles were misconfigured, otherwise any solution deployed to production at this time would have been considered incomplete?

Question #3: How does Viking Cloud intend to enhance its existing Incident Response capabilities to be able to more quickly evaluate the scope and impact of future incidents? It seems as if Viking Cloud should have been able to assess the full scope of this incident well before the timeline described in this incident report.

Question #4: Would the order of Incident Response operations have changed if the incident instead had a more direct implication on subscriber or relying party security (i.e., would Viking Cloud have waited until it determined a “stable solution” before identifying the set of customers affected by the incident if, for example, the incident suggested private keys were vulnerable)?

Flags: needinfo?(andreaholland)

Ryan, Thank you for your questions. The responses are going through internal review and will be responded to within a week.

Update:

After further investigation into the Subject RDN attribute ordering for OV certificates, we found that a small number of EV certificates shared the same mis-ordering; a total of 46 EV certificates were impacted. Core-level manual validation revealed that the EV-specific additional information was also mis-ordered.

Timeline (All times are UTC)

2024-04-01 15:15 – EV certificate impact confirmed and EV issuance stopped.
2024-04-01 20:45 – Solution for EV certificates in production.
2024-04-01 21:35 – EV issuance resumes.
2024-04-01 21:50 – Started notification to affected customers.
2024-04-06 15:15 – Deadline under the 5-day revocation requirement.

Action Items Updates

| Action Item | Kind | Due Date |
| Update test plans to require manual core level validation on a sample of certificates on every release cycle in addition to the testing using tools | Prevent | Complete |
| Internal Audit to add manual core level validation on a sample of certificates | Detect | 2024-05-01 |
| PKI Engineering to run separate instances of Linters | Prevent | Complete |
| Additional controls around the shared Linting instance to verify updates | Detect | 2024-05-01 |
| Speak up more in the CAB meetings to request that large ballots be broken down smaller to reduce the risk of errors or missed items. | Prevent | Continuous |

Appendix

Details of affected certificates

Will be attached tomorrow.

Flags: needinfo?(andreaholland)

Question #1: Upon receiving the CPR and determining mis-issuance took place “(2024-03-05 21:42 – Preliminary report on misissuance created.)” as disclosed in the Timeline section, why wasn’t it readily apparent to Viking Cloud that all OV certificates issued after September 15, 2023 were equally affected, especially when considering the volume of issuance during this period (which I understand was about 3,000 certificates)? The Timeline suggests that the full OV certificate impact was not determined until 2024-03-11 14:53.

Our initial focus was to determine where in the software code the CPR affected certificates were having their correct OV profile Subject RDN ordering rewritten in the reverse order. In our system there are various issuance paths (referring to how the certificate is requested and issued) that could be impacted. After solving the issue with the CPR affected certificates, we went back to investigate the remaining issuance paths to see if they were impacted and had the same root cause. Once that was completed, we were able to determine the impacted OV certificates in those issuance paths. In the future, we will improve our process by scanning all the issued certificates regardless of issuance path first to determine the impact; we will then work to locate and address the root cause for each issuance path. However, if this had been a security issue, we would have determined all impacted certificates, reissued another type of certificate, and revoked the impacted certificates.

Question #2: It’s stated that "Finally, once a solution was provided into production, we moved on to investigate the various issuance paths of the OV certificates to see if any of the paths were also affected.” Does this imply that it was known to Viking Cloud that all issuance profiles were misconfigured, otherwise any solution deployed to production at this time would have been considered incomplete?

Our system has several issuance paths depending on how the certificate is requested and the process by which the certificate is issued. For instance, ACME-requested certificates were not impacted by the defect, since the issuance code in the ACME OV issuance path was not overwriting the correct OV profile. So not all issuance paths resulted in the correct Subject RDN order being overwritten, and that is why we investigated each issuance path. After each of the issuance paths was checked, in order to uncover which certificates and customers were impacted, we had to either determine how the OV certificate request was submitted or we had to scan the entire set of issued OV certificates. Unfortunately, both options involved manual steps to relate the impacted certificates to a particular customer. Because of our central linting/testing environment update issue, we created the impacted OV certificate list by determining the certificate request method utilized. We have determined that this process of investigation delayed our ability to produce the full list of impacted certificates and customers. A more efficient process for future incidents will be to scan all the issued certificates regardless of issuance path.

Question #3: How does Viking Cloud intend to enhance its existing Incident Response capabilities to be able to more quickly evaluate the scope and impact of future incidents? It seems as if Viking Cloud should have been able to assess the full scope of this incident well before the timeline described in this incident report.

The initial scope was responding to the CPR directly as we have 24 hours to address those certificates listed. The plan for any future incidents of this type would be to scan all the issued certificates to create the impacted list of certificates and then investigate the solution for each issuance path afterward. Additionally, having automation in place to lint our entire issued certificate population at any point in time so that any impacted certificates could be discovered quickly would improve our incident response capabilities.

Question #4: Would the order of Incident Response operations have changed if the incident instead had a more direct implication on subscriber or relying party security (i.e., would Viking Cloud have waited until it determined a “stable solution” before identifying the set of customers affected by the incident if, for example, the incident suggested private keys were vulnerable)?

If the incident were determined to have a security impact, the focus would immediately shift to the security of the customers and end-users who rely on the certificate. The process would be to determine the impacted certificates, reach out to the impacted customers to notify them of the security impact, and present a solution or an alternative if a solution is not available. We would then revoke the impacted certificates using end-to-end automation, without waiting for customer confirmation that the replacement certificate was installed, to protect both the end-users and the customers who rely on the certificate. As the impact of this incident was not security-based, the focus started with investigating where the Subject RDN order was being reversed for the CPR-reported certificates, since the OV profile was showing the correct Subject RDN order.

We are monitoring this bug for any further questions or comments.

My main concern here is that it effectively took you around a month to recognize the full impact of this incident (e.g. the fact that it impacted EV certs as well).

I think that a new incident report should be created here, as it has raised new questions.

E.g.

  • What is the root cause of this massive delay in finding the full extent of the issue?
  • What are the action items to speed up incident response in the future?

Additionally, having automation in place to lint our entire issued certificate population at any point in time so that any impacted certificates could be discovered quickly would improve our incident response capabilities.

I do not see an action item for this.

As the impact of this incident was not security-based, the focus started with investigating where the Subject RDN order was being reversed for the CPR-reported certificates, since the OV profile was showing the correct Subject RDN order.

It is not really for a CA to determine whether a misissuance has a security impact or not. Yes, there are obvious ones such as key compromise, but there are also non-obvious security implications of misissuances.

For example, what I've seen other CAs do is effectively determine "is there a risk of damage to a human by revoking this cert?" as their criterion. And that criterion applies to individual certs, not to a situation where all the certs get that exemption.

Pre-issuance linting is performed on all certificates by the software, and post-issuance linting is performed by Internal Audit on samples of the certificates.

Why are you not doing post-issuance linting on everything? Lints are effectively free to run.

or we had to scan the entire set of issued OV certificates.

Please walk me through why this is difficult. Here's how I'd accomplish this:

  1. Dump (probably from SQL) DER encoded certs that have been issued.
  2. Bash script to loop through all these certs and pass them to a linter.
  3. Record the results.

Realistically, this is not more than an hour or two of work, especially since your issued cert count is low. What is different in your process here that makes this much more difficult?
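
For concreteness, here is a minimal sketch of steps 1–3, assuming the certificates have already been dumped as individual DER files into a directory. The directory and the linter command are assumptions; substitute whatever linter CLI you actually run and verify its exact arguments.

# Loop over dumped DER certificates, run a linter on each, record results.
import csv
import subprocess
from pathlib import Path

CERT_DIR = Path("issued-certs")             # hypothetical dump location
LINTER_CMD = ["lint_cabf_serverauth_cert"]  # assumed CLI name; verify locally

with open("lint-results.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["file", "exit_code", "output"])
    for der_file in sorted(CERT_DIR.glob("*.der")):
        result = subprocess.run(
            LINTER_CMD + [str(der_file)],
            capture_output=True,
            text=True,
        )
        writer.writerow([der_file.name, result.returncode,
                         result.stdout.strip() or result.stderr.strip()])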

Our focus during this incident was on the OV certificates and the revocation delay. We realize that linting the entire list of issued certificates would have provided the full list of impacted certificates, and that is our plan for any future incidents. To improve our incident response time, we will set up automation that enables us to lint our entire issued certificate population at any point in time. Part of that action item is to lint the entire issued certificate population, feed back the list of affected certificates, and flag those certificates in our customer portal to notify the impacted customers of their affected certificate(s). This automation was not available during this misissuance incident. It will enable us to speed up the discovery of impacted certificates and customer communication.

Pre-issuance linting is performed to catch potential issues before issuance of the final certificates, as a preventative measure against misissuance. Post-issuance linting cannot prevent misissuance; it can only detect it after it has already occurred. In our issuance process, pre-issuance linting catches all the issues that post-issuance linting would catch, and a failure during pre-issuance linting hard stops issuance; therefore, we see it as more beneficial than post-issuance linting. Post-issuance linting is used by our Internal Audit team to validate that our certificate validation and issuance processes are operating correctly.
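
Conceptually, a pre-issuance gate of the kind described above looks like the following sketch; the lint_tbs_certificate() and sign_certificate() helpers are placeholders, not our actual implementation.

# Sketch of a pre-issuance linting gate: lint the to-be-signed certificate
# before signing, and hard stop issuance on any error-level finding.
class PreIssuanceLintFailure(Exception):
    """Raised when linting finds error-level issues before signing."""


def lint_tbs_certificate(tbs_der: bytes) -> list:
    # Placeholder: run the configured linters against the unsigned
    # (to-be-signed) certificate and return error-level findings.
    raise NotImplementedError


def sign_certificate(tbs_der: bytes) -> bytes:
    # Placeholder for the CA signing operation.
    raise NotImplementedError


def issue(tbs_der: bytes) -> bytes:
    findings = lint_tbs_certificate(tbs_der)
    if findings:
        # Hard stop: the certificate is never signed if linting fails.
        raise PreIssuanceLintFailure(findings)
    return sign_certificate(tbs_der)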

Action Items Updates

| Action Item | Kind | Due Date |
| Update test plans to require manual core level validation on a sample of certificates on every release cycle in addition to the testing using tools. | Prevent | Completed |
| Internal Audit to add manual core level validation on a sample of certificates. | Detect | 2024-05-01 |
| PKI Engineering to run separate instances of Linters | Prevent | Completed |
| Additional controls around the shared linting instance to verify updates. | Detect | 2024-05-01 |
| Speak up more in the CAB meetings to request that large ballots be broken down smaller to reduce the risk of errors or missed items. | Prevent | Continuous |
| Point in Time automation to lint entire issued certificate population with customer portal notification. | Detect | 2024-12-11 |

Action Items Update

| Action Item | Kind | Due Date |
| Update test plans to require manual core level validation on a sample of certificates on every release cycle in addition to the testing using tools. | Prevent | Completed |
| Internal Audit to add manual core level validation on a sample of certificates. | Detect | Completed |
| PKI Engineering to run separate instances of Linters. | Prevent | Completed |
| Additional controls around the shared Linting instance to verify updates. | Detect | Completed |
| Speak up more in the CAB meetings to request that large ballots be broken down smaller to reduce the risk of errors or missed items. | Prevent | Continuous |
| Point in Time automation to lint entire issued certificate population with customer portal notification. | Detect | 2024-12-11 |

If there are no further comments, please set Next Update to 2024-12-11.

2024-12-11 is an extremely late deadline for being able to lint your own issued certs.

What makes this task so complicated that it will take you multiple months to deliver on this?

Flags: needinfo?(bwilson)

The action item, Point in Time automation to lint entire issued certificate population with customer portal notification, requires custom development of a large number of customer portal improvements to enable rapid customer review of impacted certificates and methods for quicker customer response, reissuance, and revocation.

To be clear, is the issue the customer portal notification part? What part of this needs to be shown to the customer directly instead of flagged for an internal team to inspect? In the future there may be an imperfect linting test incorrectly flagging certificates. Ultimately, what information would be presented to the customer that would be useful for them to know?

I fear you're trying to chase perfection on customer notification over getting your linting practices active as soon as possible. I'm still quite unclear on the precise problems that were happening. It sounds like the initial issue was multiple linters being used internally with no version control, but that isn't the priority. Instead, it seems to be making sure Profile Changes have 6 months' notice on any potential changes?

Flags: needinfo?(andreaholland)

I agree with Wayne here. This feature does not need to be shown to the subscriber to get the action item out of the way.

Fair point. I removed the portion of the action item related to customer notifications so that it solely addresses linting the entire issued certificate population, which has already been completed.

| Action Item | Kind | Due Date |
| Update test plans to require manual core level validation on a sample of certificates on every release cycle in addition to the testing using tools. | Prevent | Completed |
| Internal Audit to add manual core level validation on a sample of certificates. | Detect | Completed |
| PKI Engineering to run separate instances of Linters | Prevent | Completed |
| Additional controls around the shared linting instance to verify updates. | Detect | Completed |
| Speak up more in the CAB meetings to request that large ballots be broken down smaller to reduce the risk of errors or missed items. | Prevent | Continuous |
| Point in Time automation to lint entire issued certificate population | Detect | Completed |
Flags: needinfo?(andreaholland)

We are monitoring this bug for any further questions or comments.

If there are no further questions or comments, I request this bug be closed.

I'll consider closing this on or about Wed. 2024-06-05, unless there are questions or items to discuss.

Status: ASSIGNED → RESOLVED
Closed: 1 month ago
Flags: needinfo?(bwilson)
Resolution: --- → FIXED
