I believe that analysis of the attack may fundamentally misunderstand the concern.
Consider, for example, a CSR for sleevi.com asserting it is "Apple, Inc". This would pass the agreed-upon change you described, with respect to validating the domain, but may lead to confusion or the same human-factor errors that led to "Some-State" appearing in an issued certificate.
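As a minimal sketch of that failure mode (field names and the RA-record shape are hypothetical): a Subject field should only survive into the certificate if the RA independently validated it, and the issued value should come from the RA's records, never from the CSR.

```python
# Hypothetical sketch: never copy organizational fields from the CSR.
# Only fields independently validated by the RA (here, `validated`)
# may appear in the issued certificate; anything else is rejected.
def subject_from_validated_data(csr_subject: dict, validated: dict) -> dict:
    issued = {}
    for field, value in csr_subject.items():
        if field not in validated:
            raise ValueError(f"field {field} not validated; refusing to issue")
        if value != validated[field]:
            raise ValueError(f"field {field} ({value!r}) does not match "
                             f"validated value {validated[field]!r}")
        issued[field] = validated[field]  # use the RA's copy, not the CSR's
    return issued

# A CSR for sleevi.com asserting O="Apple, Inc" fails, even though the
# domain itself was validated:
csr = {"CN": "sleevi.com", "O": "Apple, Inc"}
ra = {"CN": "sleevi.com"}  # the RA only validated the domain
try:
    subject_from_validated_data(csr, ra)
except ValueError as e:
    print(e)  # field O not validated; refusing to issue
```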
Consider, for example, a further CSR, which introduces an additional extension with added semantics that are not known to Kamu SM. This, too, could potentially bypass the checks, based on your description of the system.
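The usual defense is an explicit allowlist of extension OIDs: anything the CA does not affirmatively understand is rejected rather than copied through. A toy sketch (the allowed set here is hypothetical; the real set is whatever the certificate profile permits):

```python
# Hypothetical sketch: reject any requested extension the CA does not
# explicitly understand, rather than passing it through to issuance.
ALLOWED_EXTENSION_OIDS = {
    "2.5.29.17",  # subjectAltName -- assumed to be profile-permitted
}

def check_requested_extensions(requested_oids):
    unknown = [oid for oid in requested_oids if oid not in ALLOWED_EXTENSION_OIDS]
    if unknown:
        raise ValueError(f"unknown extension(s) requested: {unknown}")

check_requested_extensions(["2.5.29.17"])  # accepted
try:
    check_requested_extensions(["2.5.29.17", "1.2.3.4"])  # hypothetical OID
except ValueError as e:
    print(e)  # unknown extension(s) requested: ['1.2.3.4']
```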
Consider the use of "unusual" (but defined) encoding, such as encoding multiple CNs within the same SET value for the RDN. Care must be taken to ensure that each CN is independently verified, but this is easy to get wrong if not all values in the SET are enumerated.
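Modelling a Name as a sequence of RDN SETs makes the pitfall concrete: each SET may legally carry more than one AttributeTypeAndValue, so every CN in every SET must surface for validation. A sketch (the data model here is a deliberate simplification of the DER structure):

```python
# Hypothetical sketch: an RDN is a SET, so it may legally contain more
# than one AttributeTypeAndValue. Checking only "the" CN of each RDN
# misses any extras smuggled into the same SET.
CN = "2.5.4.3"  # id-at-commonName

def all_common_names(name):
    """name: a Name modelled as a list of RDNs, each a set of (oid, value)."""
    return [v for rdn in name for (oid, v) in rdn if oid == CN]

# Two CNs encoded within a single RDN SET:
name = [
    {("2.5.4.10", "Example Org")},
    {(CN, "example.com"), (CN, "evil.example.net")},
]
for cn in all_common_names(name):
    print(cn)  # both values surface for independent validation
```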
Finally, consider the encoding of a subjectPublicKeyInfo that incorrectly omits the NULL parameters (or incorrectly includes them). A system that extracts values from the CSR, but does not re-encode, runs a similar risk.
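For rsaEncryption, RFC 3279 requires the AlgorithmIdentifier parameters to be an explicit NULL. A sketch of the safer pattern, using the raw DER bytes of the rsaEncryption AlgorithmIdentifier (the function and its tolerance for the mis-encoded input are hypothetical): recognize the submitted encoding, but always re-emit the canonical one rather than copying the CSR's bytes.

```python
# Hypothetical sketch: re-encode rather than extract-and-copy. For
# rsaEncryption, the canonical AlgorithmIdentifier carries NULL params.
RSA_ALG_ID_CANONICAL = bytes.fromhex(
    "300d06092a864886f70d010101"  # SEQUENCE { OID 1.2.840.113549.1.1.1,
    "0500"                        #            NULL }
)
RSA_ALG_ID_MISSING_NULL = bytes.fromhex(
    "300b06092a864886f70d010101"  # SEQUENCE { OID only, params omitted }
)

def canonical_rsa_alg_id(submitted: bytes) -> bytes:
    # Recognize either encoding on input...
    if submitted not in (RSA_ALG_ID_CANONICAL, RSA_ALG_ID_MISSING_NULL):
        raise ValueError("unrecognized AlgorithmIdentifier")
    # ...but always emit the canonical form into the certificate.
    return RSA_ALG_ID_CANONICAL

print(canonical_rsa_alg_id(RSA_ALG_ID_MISSING_NULL).hex())
# -> 300d06092a864886f70d0101010500
```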
Holistically, any input supplied by an external party should be treated as 'hostile', even if that party is a customer. A robust design generally allowlists only certain fields or values, and re-encodes them explicitly to ensure conformance with the appropriate certificate profile. With respect to fields within the Subject, it is generally highly error-prone to have the information supplied by the Applicant, however convenient; thus, when considering OV/IV/EV, a robust design requires that an RA agent enter the information themselves, or perform some other explicit step that guarantees a review of each and every field within the certificate.
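A sketch of that allowlist-and-re-encode shape (the profile's field list and the RA-entry format are hypothetical): the Subject is built only from RA-entered, profile-allowlisted fields, so the CSR contributes nothing to it.

```python
# Hypothetical sketch: the issued Subject is constructed solely from
# RA-entered data against an explicit profile; nothing is copied from
# the CSR, and any field outside the profile is a hard failure.
OV_PROFILE_FIELDS = ["C", "O", "L", "CN"]  # hypothetical OV profile

def build_subject(ra_entered: dict) -> list:
    subject = []
    for field in OV_PROFILE_FIELDS:
        if field not in ra_entered:
            raise ValueError(f"RA must supply {field}")
        subject.append((field, ra_entered[field]))  # re-encoded fresh
    extra = set(ra_entered) - set(OV_PROFILE_FIELDS)
    if extra:
        raise ValueError(f"fields outside profile: {sorted(extra)}")
    return subject

print(build_subject({"C": "TR", "O": "Example A.S.",
                     "L": "Ankara", "CN": "example.com.tr"}))
try:
    build_subject({"C": "TR", "O": "Example A.S.", "L": "Ankara",
                   "CN": "example.com.tr", "ST": "Some-State"})
except ValueError as e:
    print(e)  # an OpenSSL "Some-State" default can never slip through
```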
There are many other ways to achieve this, but the above highlights some of the principal concerns involved with CSR ingestion. It sounds like the CSR is not directly signed, which is encouraging, but it's worth revisiting the system of controls to ensure that adequate technical controls are in place (to ensure a compliant and properly encoded profile), as well as a /usable/ interface for RAs to perform the appropriate validation steps. That it was possible for "Some-State" to be encoded highlights a system design flaw, and ways to mitigate it (such as manual entry, an allow-list of Turkish entities where appropriate, validation of postal codes, etc.) are all essential and useful parts.
In light of all of this, it may be worthwhile to revisit the answers to Questions 6 and 7 of the incident report. While restricting the use of OpenSSL defaults is a specific mitigation for this issue, it's not a systemic mitigation for human error while validating, so understanding how those processes are being improved specifically is useful and valuable, as is understanding more deeply how or why the human error was made.
Even a human error such as "The RA operator was exhausted, because they'd been working all night" can lead to systemic improvements such as "We're hiring additional RA operators to reduce their workload" or "We're increasing pay so they don't have to work two jobs" (as a hypothetical), and is valuable to understand. Hopefully this approach helps analyze the root cause more thoroughly :)