This article is part of SpruceID’s series on the future of digital identity in America. Read the first installment here.
Decentralized identity is not just about technology; it is about values. While cryptography and open standards make it possible to build credentials that are portable, tamper-proof, and verifiable anywhere, the real question is whether those systems will reflect the principles of democratic societies.
The stakes could not be higher. Digital identity, if done well, can empower individuals, streamline government services, and reduce fraud. If done poorly, it risks becoming the most powerful surveillance tool ever created - capable of tracking people’s movements, purchases, and even political activities in real time.
This tension explains why organizations like the ACLU, Electronic Frontier Foundation (EFF), EPIC, and others have sounded alarms about mobile driver’s licenses (mDLs) and related digital ID initiatives. Their critiques are not arguments against digital identity itself, but warnings about how fragile freedom can become if these systems are designed without strict safeguards.
Why Privacy Is at the Core of Identity
Identity is the thread that connects us to the world: we show ID to vote, to fly, to buy certain goods, or to access healthcare. Every one of these interactions creates data. Traditionally, that data lived in fragmented silos—banks kept their records, DMVs kept theirs, healthcare providers kept theirs. Centralized digital ID threatens to collapse those silos into a single feed of data that can be surveilled, monetized, or hacked.
For decades, advocates have cautioned that identity is one of the most sensitive domains for human rights. As the ACLU’s Identity Crisis report notes, identity systems can easily slide into “permission slips for everyday life.” Without robust privacy protections, people may lose the ability to move through society without constant monitoring.
The EFF takes a similar stance, warning that digital ID systems “normalize ID checks” and erode the principle of anonymous participation in public life. A bartender doesn’t need your address. A website doesn’t need your Social Security number. Yet absent safeguards, digital credentials could funnel all interactions into trackable events.
The Problem With Oversharing
The “No Phone Home” campaign by ACLU, EFF, EPIC, and others highlights one of the most pressing risks: surveillance built into the technical standards themselves. Some mDL implementations allow (or require) the verifier’s system to “phone home” to the issuer each time a credential is presented. This creates a perfect log of behavior: who you are, where you went, what you showed, and when.
Even if governments promise not to abuse such logs, the data becomes an irresistible target for hackers, advertisers, or authoritarian actors. As the coalition of groups stressed, “It’s not enough to promise not to track—we need systems that make tracking impossible.”
This is where selective disclosure and zero-knowledge proofs matter. Instead of handing over the full credential, you can reveal only what’s necessary: “I’m over 21,” or “I live in this district,” without generating a data trail that can be aggregated into a surveillance profile.
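To make that concrete, here is a minimal sketch of salted-hash selective disclosure, the pattern behind formats like SD-JWT and ISO mdocs. It is illustrative only: the attribute names are invented, and the HMAC stands in for a real issuer signature, which in practice would be an asymmetric signature over a standardized credential format.

```python
# Minimal sketch of salted-hash selective disclosure. Illustrative only:
# a real issuer would sign the digest list with an asymmetric key.
import hashlib, hmac, json, os

ISSUER_KEY = b"demo-issuer-key"  # stand-in for the issuer's signing key

def issue(attributes: dict):
    """Issuer: salt and hash each attribute, then sign the digest list."""
    disclosures = {
        name: {"salt": os.urandom(16).hex(), "value": value}
        for name, value in attributes.items()
    }
    digests = sorted(
        hashlib.sha256(f"{d['salt']}|{name}|{d['value']}".encode()).hexdigest()
        for name, d in disclosures.items()
    )
    signature = hmac.new(ISSUER_KEY, json.dumps(digests).encode(), hashlib.sha256).hexdigest()
    return {"digests": digests, "signature": signature}, disclosures

def present(credential: dict, disclosures: dict, reveal: list):
    """Holder: share the signed digest list plus only the chosen disclosures."""
    return credential, {name: disclosures[name] for name in reveal}

def verify(credential: dict, revealed: dict) -> bool:
    """Verifier: check the issuer signature, then match each revealed value to a digest."""
    expected = hmac.new(ISSUER_KEY, json.dumps(credential["digests"]).encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False
    return all(
        hashlib.sha256(f"{d['salt']}|{name}|{d['value']}".encode()).hexdigest() in credential["digests"]
        for name, d in revealed.items()
    )

cred, disclosures = issue({"name": "Alex Doe", "address": "123 Main St", "over_21": True})
cred, revealed = present(cred, disclosures, reveal=["over_21"])
print(verify(cred, revealed))  # True; the verifier never sees the name or address
```

Zero-knowledge proofs go a step further, letting the holder prove a predicate such as “over 21” directly from a birthdate without disclosing any stored attribute value.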
Equity Concerns: The Digital Divide
Privacy is not the only concern. Equity matters too. As the EFF noted in “Digital ID Isn’t for Everybody,” digital IDs can unintentionally exclude people. Millions of Americans still lack smartphones, high-speed internet, or the latest devices capable of running secure mDL apps. Others share phones within households or lack the technical literacy to use them.
If physical IDs are phased out or made second-class, marginalized groups - including low-income families, seniors, immigrants, and people in rural areas - could be locked out of essential services.
Utah’s SB 260 offers one model response: it requires that physical IDs remain an option, enshrining choice as a right. This principle is critical. Digital identity should expand access, not restrict it. Any system that forces people into a single technological channel risks deepening inequality.
The Danger of Rushed Standards
Another theme across advocacy critiques is the risk of locking in bad design too early. In 2021, when DHS and TSA began considering rules around mDLs, groups including ACLU, EFF, CDT, and EPIC submitted comments urging caution. Their point was simple: rushing standards before privacy-preserving features were mature could lock in frameworks that enable surveillance and vendor lock-in.
This critique resonates with lessons from history. Early internet standards often prioritized functionality over security, only to be retrofitted later at great cost (e.g., the transition from HTTP to HTTPS). With digital identity, retrofitting is not good enough: by the time people realize their ID system is a surveillance risk, it will already be deeply embedded in daily life.
Principles for Protecting Privacy
The ACLU’s legislative guidance outlines twelve essential safeguards - basic design and policy principles that must be embedded into any digital ID system to protect citizens’ rights. These are not optional features; they are the foundation for preserving autonomy and preventing surveillance:
- No Police Access to Phones: Enforcement officers must not be allowed to gain possession of someone’s device during verification. Identity checks must not become de facto searches.
- No “Phone-Home” Tracking: Systems must be capable of functioning offline, without sending usage data back to the issuer. Digital IDs should not create logs of when and where they’re used.
- Granular Control & Selective Disclosure: Users must be able to choose which data fields are shared—such as proving over-21 status without revealing a birthdate or exact age.
- Unlinkability by Verifiers: A credential shouldn’t act as a digital fingerprint. Systems must prevent verifiers from correlating presentations across different contexts or stores.
- Open Ecosystem & Wallet Choice: Identity must remain public infrastructure, not controlled by a single vendor. Wallets should be open-source or interoperable, giving users freedom of choice.
- Verifier Accountability: Users must know who is requesting their info and must have a log of when and what data is shared, under their control.
- Transparency and Public Oversight: Before implementing digital ID systems, agencies should publicly publish technical plans and accept expert and civic feedback.
- No Remote Issuer Kill-Switch: Digital IDs must not allow issuers to revoke or disable credentials remotely without due process—they should ideally function offline.
- Right to Paper: Digital IDs must always be optional. Physical IDs must remain valid and cannot be refused in person or online.
- Limits on ID Demands: Businesses should only request the minimal data necessary and must not make services contingent on credential presentation unless legally required.
- Restrictions on Data Use & Retention: Data collected from digital IDs must be limited to immediate needs; neither verifiers nor wallet providers may retain or share it longer than necessary.
- Enforceability Through a Private Right of Action: Users must be able to challenge violations in court when their rights are breached.
These principles highlight how privacy requires both legal guardrails and technical design. For example:
- Selective Disclosure must be implemented using digital signatures or zero-knowledge proofs - not as optional extras added later.
- No Phone-Home demands that mDLs and digital IDs work fully offline by default - a case where policy defines what the technology must prohibit (see the sketch after this list).
- Open Wallets require interoperable standards (e.g. W3C VCs, ISO mdocs) and regulatory mandates to ensure choice, not vendor lock-in.
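To show what the no-phone-home principle means in code, the sketch below verifies a credential entirely on-device against a locally cached issuer public key, with no network call and no usage log. It is a simplified illustration, not any particular mDL stack: the claim names are invented, and it assumes the third-party cryptography package and out-of-band distribution of the issuer's key.

```python
# Minimal sketch of offline ("no phone home") verification, assuming the
# issuer's public key was distributed to the verifier ahead of time.
# Requires the third-party `cryptography` package; claim names are illustrative.
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- issuance happens once, before any verification ---
issuer_key = Ed25519PrivateKey.generate()
claims = {"over_21": True, "expires": "2030-01-01T00:00:00+00:00"}
payload = json.dumps(claims, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# --- the verifier caches the issuer's public key out of band ---
cached_issuer_public_key = issuer_key.public_key()

def verify_offline(payload: bytes, signature: bytes) -> bool:
    """Check the signature and expiry locally: no call to the issuer, no log of use."""
    try:
        cached_issuer_public_key.verify(signature, payload)
    except InvalidSignature:
        return False
    claims = json.loads(payload)
    return datetime.fromisoformat(claims["expires"]) > datetime.now(timezone.utc)

print(verify_offline(payload, signature))  # True, decided entirely on-device
```

Revocation can still work without tracking, for example through short-lived credentials or periodically downloaded status lists instead of per-presentation callbacks to the issuer.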
Technology in Service of Policy
The good news is that the cryptographic tools already exist. Digital signatures make credentials tamper-proof. Selective disclosure prevents oversharing. Zero-knowledge proofs enable compliance without exposure. But technology alone is not enough. Without legal and regulatory frameworks, market incentives often push in the wrong direction - toward surveillance, monetization, and lock-in.
That’s why policy and technology must be designed together. Standards bodies like W3C, ISO, and OpenID Foundation define the technical pieces, while legislatures and regulators must define the guardrails for use. Civil society has a role too: groups like ACLU and EFF keep systems accountable to democratic values.
This collaboration is not optional. As EFF warns, digital identity could easily become “a tool of control, not freedom.” Embedding privacy into both the code and the law is the only way to prevent that outcome.
Utah’s SB 260 enshrines some of these principles into law, banning phone handovers, mandating selective disclosure, and requiring physical ID options. Europe’s eIDAS 2.0 similarly requires voluntary, free, privacy-preserving wallets. The U.S. has an opportunity to learn from both.
Examples of Privacy at Work
Consider a few practical examples:
- Airport Security: A traveler presents a mobile ID at a TSA checkpoint. With privacy-preserving design, the TSA agent verifies the credential locally, without logging the interaction back to the DMV. Only the attributes relevant to travel are shared.
- Online Age Verification: A website selling age-restricted goods requests proof of age. Instead of uploading a driver’s license scan, the user generates a zero-knowledge proof: “I am over 18.” The website never sees the name, address, or ID number.
- Bank Onboarding: A bank must comply with Know Your Customer (KYC) rules. A new customer presents a credential proving they are not on the OFAC list, verified cryptographically without revealing their full personal profile.
In each case, the technology provides the privacy features, but policy ensures they are mandatory rather than optional add-ons.
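To illustrate how a wallet can enforce minimization rather than merely request it, here is a hypothetical sketch in which the wallet checks a verifier's request against an allow-list before releasing anything. The verifier categories and allowed fields are assumptions for illustration; a real deployment would derive them from statute or a trust registry.

```python
# Illustrative wallet-side data minimization: the wallet compares what a
# verifier asks for against what that class of verifier may request, and
# refuses anything broader. The allow-list below is hypothetical.
ALLOWED_REQUESTS = {
    "age_restricted_retail": {"age_over_18"},
    "tsa_checkpoint": {"full_name", "portrait", "document_number"},
}

def respond(verifier_type: str, requested_fields: set, credential: dict) -> dict:
    """Release only fields that are both requested and allowed for this verifier."""
    allowed = ALLOWED_REQUESTS.get(verifier_type, set())
    excessive = requested_fields - allowed
    if excessive:
        raise PermissionError(f"Refusing over-broad request for: {sorted(excessive)}")
    return {field: credential[field] for field in requested_fields}

credential = {"age_over_18": True, "full_name": "Alex Doe", "address": "123 Main St"}
print(respond("age_restricted_retail", {"age_over_18"}, credential))
# respond("age_restricted_retail", {"address"}, credential) would raise PermissionError
```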
The Path Forward: Guardrails First, Scale Second
The path forward is clear: privacy must come first. That means adopting guardrails before mass rollout. As civil society groups emphasize, once surveillance capabilities are embedded in technical standards, they are almost impossible to undo.
Instead of treating privacy as a feature to be added later, governments and vendors should bake it into requirements from the start. That means:
- Passing legislation like Utah’s SB 260 in more states.
- Ensuring federal agencies adopt NIST and DHS guidance that reflects privacy-first design.
- Supporting open-source implementations that prove selective disclosure and zero-knowledge proofs in practice.
Digital identity has the potential to become the invisible infrastructure of trust for the 21st century. But without strong safeguards, it could also become an invisible net of surveillance. The difference will be determined by choices made now - about standards, about legislation, and about whether user control is a principle or just a slogan.
SpruceID’s Perspective
At SpruceID, we believe privacy is not negotiable. We’ve contributed to standards bodies, partnered with states like California and Utah, and worked with federal pilots at DHS. In every case, our approach is the same: build technology that enforces privacy by default, and support policy frameworks that make those protections durable.
Our mission is to ensure that digital identity remains something you own, not something that owns you. That means continuing to build cryptographic tools like selective disclosure and zero-knowledge proofs, while also working with policymakers to enshrine principles of voluntariness, data minimization, and non-surveillance into law.
Conclusion: Privacy as the Foundation of Trust
Digital identity is inevitable. The question is not whether it will exist, but what form it will take. Will it be a system that empowers people, or one that monitors them? Will it reduce fraud while preserving liberty, or will it entrench new forms of surveillance?
The answer depends on whether we treat privacy as the foundation of trust. Technology provides the tools, but policy provides the guardrails. Together, they can create a digital identity ecosystem that strengthens democracy rather than undermining it.
As the ACLU puts it, “Digital identity must expand freedom, not restrict it.” That is the challenge before us - and the opportunity.
This article is part of SpruceID’s series on the future of digital identity in America. Read more in the series:
- Foundations of Decentralized Identity
- Digital Identity Policy Momentum
- The Technology of Digital Identity
- Privacy and User Control
- Practical Digital Identity in America
- Enabling U.S. Identity Issuers
- Verifiers at the Point of Use
- Holders and the User Experience
Building digital services that scale takes the right foundation.
About SpruceID: SpruceID builds digital trust infrastructure for government. We help states and cities modernize identity, security, and service delivery — from digital wallets and SSO to fraud prevention and workflow optimization. Our standards-based technology and public-sector expertise ensure every project advances a more secure, interoperable, and citizen-centric digital future.