Digital Identities Need More Transparency: A Framework Proposal

We explore the potential of digital identity solutions to enhance privacy through selective disclosure, highlight the risks of verifier abuse, and propose a reasonable disclosure framework to standardize and safeguard data-sharing practices.

As we’ve previously discussed on this blog, new digital identity solutions, such as mobile driver’s licenses, hold exciting promise for upgrading both the security and the privacy of the identity holder. In this post, I want to highlight yet another area where we, as an industry, still have work to do to make that promise of privacy a reality.

Promise of privacy-preserving disclosure

Digital identity specifications allow for the concept of “selective disclosure,” where ID holders can decide exactly what personal information they want to pass along to someone verifying their digital ID. The classic example is a US bar with an over-21 age restriction: currently, a person presents a conventional physical driver’s license to prove their age, which also shows the bartender their name, address, and full date of birth. No bouncer needs to know all of that just to confirm someone is old enough to responsibly enjoy a tasty beverage. A digital ID with selective disclosure, on the other hand, could allow you to prove you’re over 21 without revealing any other personal information.
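To make this concrete, here is a minimal sketch (in TypeScript) of what a selective-disclosure request for the bar scenario might look like. It is loosely modeled on the mobile driver’s license (ISO/IEC 18013-5) data model, but the type and field shapes are illustrative, not drawn from any finalized standard.

```typescript
// A verifier's request for a single boolean attribute. Loosely modeled
// on the mobile driver's license (ISO/IEC 18013-5) data model; the
// shape and names here are illustrative, not normative.
interface AttributeRequest {
  namespace: string;       // grouping for the credential's data elements
  element: string;         // the specific attribute being requested
  intentToRetain: boolean; // whether the verifier intends to store the value
}

// The bar scenario: prove "over 21" and nothing else.
const ageCheck: AttributeRequest = {
  namespace: "org.iso.18013.5.1",
  element: "age_over_21",
  intentToRetain: false,
};
```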

There are scores of other examples where selective disclosure could be of benefit. An insurance card could prove that a person seeking health care has valid coverage without revealing that person’s employer. An asylum seeker with a work permit could prove their employability without having to reveal their country of origin. Digital identities can serve just as many purposes as their analog versions, with the added benefit of reducing (or entirely removing) unnecessary data oversharing and leakage.

Risks of power imbalance

However, there are also potential downsides to this model. While selective disclosure appears to put the power in the hands of the credential holder to choose what information they want to share, there is potential for abuse by the other entity in the disclosure transaction, namely the verifier.

Every credential data request involves two parties: a verifier, the party seeking information, and an ID holder, the person with the information being sought. The verifier asks to confirm information about the holder, and the holder then “presents,” or sends, the requested digital data with an authenticating signature.
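As a rough sketch of that exchange, reusing the AttributeRequest type from the earlier example (again, these shapes are assumptions for illustration, not a specified wire format):

```typescript
// What a verifier sends: who they are, what they want, and why.
interface VerificationRequest {
  verifierId: string;            // identity of the requester
  requested: AttributeRequest[]; // the attributes being asked for
  purpose: string;               // the stated reason for the request
}

// What the holder sends back: only the approved attributes, signed.
interface Presentation {
  disclosed: Record<string, unknown>; // the data the holder agreed to share
  holderSignature: string;            // authenticates the disclosed data
}
```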

An unethical verifier might demand to see all information associated with a digital credential, not just the data needed for a particular transaction. They might also obscure a verification request so that an ID holder isn’t made aware of all the data being requested, or isn’t given the opportunity to decide whether the request is reasonable before granting consent. Particularly given how new this model of interaction is, holders of digital identity credentials will need help from industry practitioners to stay safe.

Reasonable Disclosure Framework

To that end, we see the potential for creating, socializing, and committing to standardized formats for common use cases, which we’re calling a Reasonable Disclosure Framework.

This shared format for data requests would give future users the full power of digital ID technology by ensuring data requests are transparent and that overreaching or deceptive requesters can be flagged. Moreover, since it would be based on an open standard, the framework would enable any organization to release a set of disclosure standards tailored to protect the privacy of their members or constituencies. 
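As one way to picture it, here is a hypothetical shape for a published disclosure filter. No such open standard exists yet, so every field below is an assumption about what the format could contain.

```typescript
// A hypothetical published disclosure filter. The format is an
// assumption for illustration; an actual standard would be defined
// openly by the community.
interface DisclosureFilter {
  publisher: string;  // the organization vouching for this filter
  version: string;
  // Per-use-case ceilings: attribute names considered reasonable
  // to request for each kind of transaction.
  allowed: Record<string, string[]>;
  // Verifiers the publisher has flagged for overreach or deception.
  flaggedVerifiers: string[];
  publisherSignature: string; // lets wallets verify the filter's origin
}
```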

One example might be a filter offered by an organization dedicated to the rights of retired Americans, designed to be especially vigilant against the identity scams often run against the elderly. Another filter, created by a computer science professional organization, might instead give its expert users more personal discretion.

These disclosure filters would be attached to a digital wallet, much as extensions are attached to a web browser. A user could install more than one filter, creating a composable, layered approach to disclosure control.

This could look like a user installing a fairly permissive default disclosure filter, then adding specific, narrower filters for their individual habits and circumstances: an e-commerce disclosure filter from a trusted online-shopping watchdog, and a medical information filter from a privacy foundation. These organizations would have specialized knowledge of their particular niches, meaning they would be best equipped to track, evaluate, and flag excessive or malicious data requesters in those niches.

Informed Data Sharing

Each filter could set a level of disclosure considered acceptable for a particular use case, including the types of data being requested and for what purpose. If a verifier exceeded these acceptable parameters, the user would be presented with a warning message or popup, similar to the warnings web browsers now use to steer us away from insecure websites. The message could be something as simple as “This request may lead to misuse of your personal information.”
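A wallet might evaluate an incoming request against every installed filter along these lines. The sketch below reuses the hypothetical VerificationRequest and DisclosureFilter types from earlier; the evaluation rule is an assumption, not a specified algorithm.

```typescript
// Run a request through each installed filter; return a warning if any
// filter objects, or null if the request looks reasonable.
function evaluate(
  request: VerificationRequest,
  filters: DisclosureFilter[],
  useCase: string,
): string | null {
  for (const filter of filters) {
    if (filter.flaggedVerifiers.includes(request.verifierId)) {
      return `Warning: ${filter.publisher} has flagged this verifier for past misuse.`;
    }
    // Any attribute outside this filter's ceiling for the use case
    // counts as excessive; the most restrictive installed filter wins.
    const permitted = filter.allowed[useCase] ?? [];
    const excessive = request.requested.filter((r) => !permitted.includes(r.element));
    if (excessive.length > 0) {
      return "This request may lead to misuse of your personal information.";
    }
  }
  return null; // no filter objected; proceed to the normal consent prompt
}
```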

This would give the average holder at least a fighting chance to be aware of when and how their data was being collected, and to opt out of transactions that went against a documented standard of reasonable trust.

Enforceable Responsibility

This proposed system could also create accountability for deceptive credential requests and data misuse. Verification requests and usage disclosures would be cryptographically signed by requesters, just as digital IDs themselves are signed. Digital wallets would keep full records of these signed data requests and usage disclosures, so users could have detailed, accurate records of what information was asked for and what was disclosed in response to a specific request.
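The wallet-side record might look something like this (field names are again illustrative):

```typescript
// One audit-log entry per verification event, kept by the wallet.
interface RequestLogEntry {
  timestamp: string;            // when the request was received
  request: VerificationRequest; // the full request, as sent
  requesterSignature: string;   // the verifier's signature over its request
  disclosed: string[];          // which attributes the holder actually shared
}
```

Because the request itself is signed, a holder could later prove exactly what a verifier asked for, not just what the holder chose to answer.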

Over time, anonymized statistical analysis of signed request records could be used to identify verifiers that misrepresented their data needs or usage, such as an online store found to have sold user data to advertisers. Trust organizations that publish reasonable disclosure filters could then impose consequences on verifiers prone to such abuses, for example by flagging them in filter updates. This would allow enforcement of acceptable data practices to be driven more directly by the entities closest to specific credential scenarios.
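To illustrate, a toy aggregation over anonymized, opt-in records might count how often each verifier triggered a filter warning. The threshold and the reporting pipeline here are assumptions made for the sake of the sketch.

```typescript
// Flag verifiers whose requests triggered warnings more often than a
// chosen threshold. Input records are assumed to be anonymized and
// shared with the holder's consent.
function flagAbusiveVerifiers(
  records: { verifierId: string; warned: boolean }[],
  threshold = 0.2,
): string[] {
  const totals = new Map<string, { warned: number; total: number }>();
  for (const r of records) {
    const t = totals.get(r.verifierId) ?? { warned: 0, total: 0 };
    t.total += 1;
    if (r.warned) t.warned += 1;
    totals.set(r.verifierId, t);
  }
  return [...totals.entries()]
    .filter(([, t]) => t.warned / t.total > threshold)
    .map(([id]) => id);
}
```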

These standards for reasonable sharing, warnings about excessive requests, and punitive measures against deceptive requesters would be the primary variables set by the diverse disclosure filters this proposed framework enables.

Interoperable Protocols

The most important feature of the digital identity standards currently emerging from industry and government efforts is that they are open and interoperable protocols. That is, they are based on a set of technical and data standards that allow any party to issue, receive, or present a credential. Building off this guiding principle, any organization would be able to build and distribute a disclosure filter.

We hope you see the potential in this proposal to protect consumers and bring reasonable transparency to the fore as digital identities rise in usage and prominence. Please reach out if you have an interest in discussing this concept with us at SpruceID.


About SpruceID: SpruceID is building a future where users control their identity and data across all digital interactions.