Hello NIST,

I was extremely impressed with the current draft Digital Identity Guidelines at https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-63-4.ipd.pdf. However, when reading the draft IAM roadmap document at https://www.nist.gov/system/files/documents/2023/05/22/NIST%20IAM%20Roadmap_FINAL_For_Publication.pdf, I became concerned by its unqualified commitment to expand the investigation of biometrics, and was especially surprised by the unsourced claim that biometrics are the "most measurable component of the Identity Ecosystem" offered as the rationale for that decision. In short, the biometric component of the roadmap contradicts the draft Digital Identity Guidelines in several ways, and its focus on "measurability" falls into a category of fallacy that has repeatedly harmed national security in the past, while privileging the short-term fiscal interests of government contractors over the country's ability to build long-term national security.

(I have previously worked at LLNL, but do not currently work there, and my feedback here should not be interpreted to represent that institution.)

My feedback here will be limited to this excerpt of the draft IAM roadmap (emphasis mine):
Expand and Enhance Biometric and Identity Measurement Programs
Expand and enhance efforts to measure, test, and improve the accuracy, usability, and inclusivity
of biometric and identity technologies. To date, biometrics are the most measurable component
of the Identity Ecosystem, with standards, methods, and processes to evaluate their performance.
NIST will continue to enhance our existing face, fingerprint, and iris activities while conducting
foundational research to understand how best to apply metrology to new and emerging identity
technologies and processes.

Unique Risks of Accurate Biometric Data
I will first describe the qualities of biometrics that make them uniquely risky to obtain, retain, and federate among multiple entities. The following analysis draws upon this prompt from the draft Digital Identity Guidelines:
Risk Management
What additional privacy considerations (e.g., revocation of consent, limitations of
use) may be required to account for the use of identity and provisioning APIs that
had not previously been discussed in the guidelines?

In particular, biometrics are unique among authentication mechanisms in that they cannot be replaced if stolen. This makes them an extremely attractive target for foreign adversaries and for-profit corporations alike, because they can be stored indefinitely to track individuals without their consent or their government's. The draft Digital Identity Guidelines directly acknowledge this risk, and explicitly prohibit the use of biometrics as a single-factor authenticator:
4.3.1. Authenticators
...
A biometric also does not constitute a secret and can not be used as a single-factor
authenticator.

Critically, the impact of any breach of biometric data only increases as biometric technology becomes more accurate; indeed, the inaccuracy of biometrics on certain populations currently serves to protect those populations from these harms. When considering the "inclusivity" of biometric authentication, the federal government must therefore ask whether further investment in this technology exposes marginalized groups to greater harm. But even assuming a perfectly inclusive biometric authentication technology, the inability to ever replace biometric data once stolen imposes a uniquely drastic risk, one which grows in direct relation to the accuracy of the biometric technology. This is why the draft IAM roadmap's unqualified emphasis on improving accuracy, without any plan to mitigate this unbounded and indefinite risk, strikes me as extremely risky.
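The asymmetry between a cryptographic authenticator and a biometric template can be illustrated in a few lines of Python (a minimal sketch of my own; the function names are mine, not from any NIST document):

```python
import hmac
import secrets

# A cryptographic authenticator is trivially replaceable: if the stored
# secret is breached, the subscriber simply enrolls a fresh one and the
# stolen copy becomes worthless.
breached_key = secrets.token_bytes(32)
rotated_key = secrets.token_bytes(32)  # new secret; old one is revoked

def verify_secret(stored_key: bytes, presented_key: bytes) -> bool:
    # Deterministic, constant-time comparison; no tunable "score".
    return hmac.compare_digest(stored_key, presented_key)

assert verify_secret(rotated_key, rotated_key)
assert not verify_secret(rotated_key, breached_key)

# A biometric template admits no analogous rotation: the underlying
# characteristic (face, fingerprint, iris) is fixed for life, so a
# breach is permanent, and the more accurate the matcher becomes, the
# more reliably the stolen template identifies its subject forever.
```

There is simply no operation a subscriber can perform on their own body that corresponds to `secrets.token_bytes` above, which is the crux of the unbounded risk described in this section.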

Historical and Economic Context for Misuse of Biometrics
No technology is developed in a vacuum. While one could argue that live testing of nuclear weapons might improve the technology, I am extremely proud that the US has chosen instead to cement its place as a leader in supercomputing and laser science in order to perfect its ability to simulate nuclear weapons via LLNL and other national labs. NIST constantly updates cryptographic standards when prior standards such as SHA-1 are broken (and usually, long before that occurs). Acknowledging these external factors and blazing a new path instead of clinging to familiar technologies is how the US continues to maintain its edge in national security.

Unfortunately, several such external factors make the expansion of biometric authentication a uniquely risky provision, and I fear that the draft IAM roadmap clings to outdated dogma. The draft Digital Identity Guidelines explicitly frame biometrics as part of the "classic paradigm for authentication":
4.3.1. Authenticators
The classic paradigm for authentication systems identifies three factors as the
cornerstones of authentication:
• Something you know (e.g., a password)
• Something you have (e.g., an ID badge or a cryptographic key)
• Something you are (e.g., a fingerprint or other biometric characteristic data)

While "something you are" may be an effective mnemonic, the 21st-century US presents multiple risk factors that turn clinging to that nursery rhyme into a recipe for undermining many of the goals of the draft Digital Identity Guidelines. I will describe two specific issues that I believe are critical to address in the next draft of the IAM roadmap: the economic incentives of the corporate surveillance economy, and the roadmap's divergence from the Guidelines' emphasis on optionality.

In particular, this excerpt from the draft Digital Identity Guidelines highlights some of the risks of entrusting biometric authentication to corporations in the surveillance economy prevalent throughout the current US technology industry:
5.1.4. Impact Analysis
...
Identity Proofing:
• The impact of providing a service to the wrong subject (e.g., an attacker
successfully proofs as someone else).
• The impact of not providing service to an eligible subject due to barriers, including
biases, faced by the subject throughout the process of identity proofing.
• The impact of excessive information collection and retention to support identity
proofing processes.
Authentication:
• The impact of authenticating the wrong subject (e.g., an attacker who compromises
or steals an authenticator).
• The impact of failing to authenticate the correct subject due to barriers, including
biases, faced by the subject in presenting their authenticator.
Federation:
• The impact of the wrong subject successfully accessing an application, system, or
data (e.g., compromising or replaying an assertion).
• The impact of releasing subscriber attributes to the wrong application or system.

The economic incentive for corporate entities to retain high-quality biometric information on US citizens, without regulatory controls on its use or distribution, is the main reason I question the unqualified expansion of biometrics in this draft roadmap. I do not question the role of government contractors in providing authentication services for federal agencies in general, but I believe it is critically important to avoid developing NIST cybersecurity standards and other governmental technology that provide outsize economic incentives for private entities to misuse extremely sensitive data in ways that harm national security. It is unfortunate that a very successful US industry is incentivized to amass large datasets of identifiers without strong regulatory incentives to protect them, and I am aware that it is not NIST's job to set industrial policy. However, I do believe it is NIST's responsibility to incorporate a realistic interpretation of the US private corporate surveillance economy into its standards in order to protect US citizens and government employees from foreign adversaries.

Contradiction of Digital Identity Guidelines
The rest of my feedback revolves around the very specific pattern of inconsistency I perceive between the unqualified expansion of biometrics in the draft IAM roadmap and the goals identified in the draft Digital Identity Guidelines.

The draft IAM roadmap states these guiding principles:
1. Enhance privacy and security by integrating confidentiality, integrity, and availability
into our efforts alongside the core privacy engineering objectives of predictability,
manageability, and disassociability.
2. Foster equity and individual choice by exploring the diverse socio-technical impacts of
identity technology and integrating optionality and flexibility into our work products.

Similarly, the draft Digital Identity Guidelines state these goals:
2. Emphasize Optionality and Choice for Consumers: In the interest of promoting
and investigating additional scalable, equitable, and convenient identity verification
options, including those that do and do not leverage face recognition technologies,
this draft expands the list of acceptable identity proofing alternatives to provide
new mechanisms to securely deliver services to individuals with differing means,
motivations, and backgrounds. The revision also emphasizes the need for digital
identity services to support multiple authenticator options to address diverse
consumer needs and secure account recovery.
3. Deter Fraud and Advanced Threats: This draft enhances fraud prevention
measures from the third revision by updating risk and threat models to account
for new attacks, providing new options for phishing resistant authentication, and
introducing requirements to prevent automated attacks against enrollment processes.
It also opens the door to new technology such as mobile driver’s licenses and
verifiable credentials.

Despite the explicit and repeated emphasis on optionality and choice for consumers, and the specific clarification in the draft Digital Identity Guidelines that alternatives to face recognition should be provided, there is no sign of that optionality in the draft IAM roadmap. Notably, the language that specifically identifies alternatives to face recognition technologies is absent from the draft IAM roadmap, which seems rather mysterious given that the rest of the language around individual choice is otherwise extremely similar. I understand that these two documents serve different purposes and will have separate goals and principles. But given the financial incentive I described above for corporations to amass biometric data and to be entrusted by the federal government with that data, it certainly appears that the draft IAM roadmap has been modified to diverge from the goals of the draft Digital Identity Guidelines in order to support more corporate-friendly initiatives, initiatives which will actually undermine national security due to the unique risks associated with biometric data. The mention of "responsible innovation" in bold text following the draft IAM roadmap's guiding principles seems hollow and cynical in light of how starkly the actual roadmap diverges from the lofty goals of the draft Digital Identity Guidelines.

Conclusion
While my feedback has largely revolved around the categorical risk of using biometric data for authentication in any capacity, I want to conclude by highlighting the ostensibly quantitative wording that really gave me pause, emphasized in the first quotation at the beginning of this email:
To date, biometrics are the most measurable component
of the Identity Ecosystem, with standards, methods, and processes to evaluate their performance.

There is no citation or further justification for this statement, yet it is the only rationale provided for the unqualified expansion of research into biometrics in the draft IAM roadmap, contrary to the optionality and choice described in the draft Digital Identity Guidelines. Considering that biometrics are the only intrinsically non-deterministic method of authentication, this claim suggests a strange rhetorical sleight of hand: cryptographic digital identities and physical credentials (the alternative options to biometrics) are dismissed for not being "measurable" enough, when in actuality they simply require significantly less physical/digital infrastructure and government expenditure to evaluate, because they are deterministic and repeatable rather than statistical and probabilistic.

I can certainly understand why NIST would identify measurement and evaluation of biometrics as an important research focus, but that is precisely because biometric authentication is currently in the wild west, with corporations compiling their own dossiers on US citizens and selling them to the highest bidder entirely without reference to NIST standards. Seeing this language in the draft IAM roadmap is extremely concerning to me, especially juxtaposed with the surgical removal of the optionality language from the draft Digital Identity Guidelines. This language on biometrics seems to set the US down a path toward further massive breaches of persistent identifiers which can never be replaced, forcing the federal government to rely ever more heavily on external vendors for unproven and unreliable biometric authentication mechanisms instead of building on the proven and internationally recognized NIST cryptographic standards. I fervently hope to see more of the draft Digital Identity Guidelines reflected in future drafts of the IAM roadmap.
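The deterministic/probabilistic distinction can be made concrete with a short Python sketch (my own toy illustration; the threshold value and function names are arbitrary assumptions, not drawn from any standard):

```python
import hashlib
import hmac
import secrets

# Deterministic verification (cryptographic authenticator): an exact,
# repeatable check with no error rates to characterize.
key = secrets.token_bytes(32)

def verify_mac(key: bytes, message: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg = b"challenge-nonce"
tag = hmac.new(key, msg, hashlib.sha256).digest()
assert verify_mac(key, msg, tag)               # always succeeds
assert not verify_mac(key, msg, b"\x00" * 32)  # always fails

# Probabilistic verification (biometric matcher): a similarity score
# compared against a tunable threshold, so every deployment trades off
# false accepts against false rejects and must be empirically measured.
def verify_biometric(similarity_score: float, threshold: float = 0.8) -> bool:
    return similarity_score >= threshold

# The same genuine user might score 0.79 one day and 0.91 the next,
# depending on lighting, sensor, pose, aging, etc.
assert verify_biometric(0.91)
assert not verify_biometric(0.79)
```

The first check requires no measurement program at all, because its behavior follows directly from the mathematics; only the second demands the standing infrastructure of accuracy evaluation that the roadmap cites as "measurability".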

I will leave you with a reminder of a classic category of failure, infamous directly because of its relationship to national security (https://en.wikipedia.org/wiki/McNamara_fallacy):
The McNamara fallacy (also known as the quantitative fallacy), named for Robert McNamara, the US Secretary of Defense from 1961 to 1968, involves making a decision based solely on quantitative observations (or metrics) and ignoring all others. The reason given is often that these other observations cannot be proven.

Thank you for your time,
Danny McClanahan