In an era where bots, fake accounts, and AI-generated content are proliferating online, one company has launched a radical idea: scan a person’s iris to verify they are indeed human. The San Francisco startup Tools for Humanity has deployed spherical devices called Orbs that capture iris patterns, issue a “proof of human” credential, and aim to stem the tide of impersonation, fraud and synthetic identity.
At first glance the vision reads like science fiction. But the launch and rollout of the Orb have sparked both excitement and concern. Can scanning someone’s eye truly prove “humanness”? And what are the risks to privacy, identity and data security? In this article we unpack how the Orb works, why it matters in the age of AI, the strengths and vulnerabilities of the system, and what the emergence of biometric human verification means for our digital future.
The Problem Being Addressed: Humans vs Bots
Online identity has always been messy. Fake accounts, bots, automation and now generative AI make it harder than ever to trust who (or what) is behind a screen. Tools for Humanity argues that traditional verification (phone number, email, CAPTCHA) is no longer sufficient in an AI-driven age. The logic goes: if you can prove the eyeball you’re scanning belongs to one unique human being, you can anchor identity in a way that’s harder for bots to replicate.
The Orb is marketed as a device built to give each person a unique biometric key—an iris code—that links to a digital credential called “World ID.” That credential becomes a signal that you are a verified human inside whatever ecosystem uses it: social networks, games, crypto platforms, marketplaces. The offer: identity systems that keep bots out and humans in.

How the Orb Works: Technology & Process
Device & Capture
The Orb is a metallic sphere installed in physical locations (pop-up booths, retail kiosks, events). A user approaches, looks into the Orb’s aperture, and allows the device to scan the iris pattern of one eye (sometimes both). The system captures the unique texture of the iris—an internal structure of the eye, visible through the cornea, that remains stable over a lifetime and is highly distinctive.
The scan is converted into a cryptographic template—a mathematical representation of the iris pattern rather than a raw image. That template is then used to generate a “proof of human” token, combined with a mobile app, and linked to the user’s digital ID. The company claims the biometric image is not stored in a central database; instead, only the hashed template is kept, on-device or in encrypted form, and the raw image is deleted.
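The enrolment flow described above can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration—real iris-recognition pipelines derive a structured iris code from filtered image features, and the function names and salt scheme here are invented for the example—but it captures the key property: only an irreversible digest is retained, never the raw image.

```python
import hashlib
import secrets

def derive_template(raw_iris_bits: bytes, salt: bytes) -> str:
    """Derive an irreversible template from raw iris data.

    Only this digest needs to be stored; the raw capture can be
    discarded immediately after enrolment.
    """
    return hashlib.sha256(salt + raw_iris_bits).hexdigest()

# Stand-in for the sensor's output at enrolment time.
raw_scan = b"captured iris texture bits"
salt = secrets.token_bytes(16)  # per-enrolment randomness

template = derive_template(raw_scan, salt)

# The raw image is deleted once the template exists.
del raw_scan
```

Because the digest is one-way, compromising the stored template does not directly yield the iris image—though, as discussed below, a leaked template is still sensitive, since the underlying biometric can never be changed.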
Authentication & Use
Once scanned and verified, the user receives a World ID or equivalent credential. This credential is then usable in applications that need to check that a user is a real human—not a bot or duplicate identity. The system supports either fully-human verification or “proof-of-personhood” models for online services. Crucially, the Orb’s creators describe it as open-source and privacy-first: the biometric computations run locally, and participants are rewarded (in some locales) with digital tokens in exchange for scanning.
Strengths of the System
- High uniqueness and spoof resistance: Iris patterns are widely recognised as among the most distinctive biometrics, harder to spoof than faces or fingerprints.
- Single verification event: Once scanned, the identity token can be reused in multiple services without repeated biometric capture, reducing friction.
- Bot mitigation: With more sophisticated bots and AI agents emerging, a biometric “proof of human” creates a barrier that simpler CAPTCHAs cannot match.
- Global ambition: The company aims for scalable deployment across many geographies, potentially providing a universal human-verification layer.
Key Concerns & Weaknesses
Privacy & Data Security
Even though the company states it deletes raw images and only stores a template, biometric data is fundamentally sensitive. Unlike a password, an iris pattern cannot be changed. If a template or system is compromised, the risk is permanent. Additionally, the linkage of such biometric keys with other digital services raises re-identification risks and tracking concerns.
Consent & Ethical Use
In some regions the Orb sessions offered crypto rewards or incentives to users who submitted biometric scans. Critics argue this can distort consent, especially in low-income markets or those less familiar with the technology. How transparent the opt-in campaigns are, and how well users understand this new form of identity capture, remains contested.
Surveillance & Power Asymmetries
The launch of biometric verification at scale introduces questions: does requiring an iris scan to prove you’re human favour certain populations? Could this lead to exclusion, discrimination or coercion? The technology may become a gate-keeper for essential services or domains (games, marketplaces, social access) and thus wield societal power.
Bot/Proxy Work-arounds
No system is perfect. There remains risk of “mule accounts” where a human verifies once then proxies access to bots or multiple identities. Even biometric systems may be subverted if the operational enforcement is weak. Thus, the human-proof layer may offer deterrence but not absolute immunity.

Deployment & Regulatory Landscape
The Orb system is already in pilot and operational modes across various regions. The company reports millions of scans so far and retail-style locations for user enrolments. But regulatory scrutiny has followed: European privacy authorities have challenged operations that collect iris biometrics, and in some jurisdictions the rollout has been paused or limited pending compliance reviews.
Because biometrics are regulated heavily in many jurisdictions (GDPR in the EU, Biometric Information Privacy Acts in U.S. states), large-scale iris-capture programs must manage consent, storage, deletion, data minimisation, cross-border transfers and transparency. The company’s open-source claims and encryption practices are a positive step, but regulators and advocacy groups are watching closely.
Why It Matters & Where It’s Going
For the Online Ecosystem
As platforms struggle with bot armies, fake accounts and synthetic identity misuse, a layer of human-verification anchored in biometrics changes the game. Verified humans may gain trust, preferential access, or reduced friction; services may prune non-verified actors. This could reshape how identity, access and trust are mediated online.
For Biometric Norms
The Orb contributes to a push where biometrics become more than unlocking your phone—they become gates to online identity, reputation and access. That elevates the stakes. Users’ biometric data may underpin broader digital identities. The risk is that once accepted widely, the function creeps into everyday life.
For Tools for Humanity
The startup positions itself as building foundational infrastructure for the “proof of human” age. Its vision spans beyond verifying users for crypto or games—towards universal digital identity, human verification in AI-agent ecosystems, and possibly decentralized access to services. If successful, it may capture a new layer of digital infrastructure.

Should You Trust the Orb? A Balanced View
Yes, from the perspective that the system addresses a real and growing problem: bots, deepfakes, identity fraud and AI-driven impersonation. The technology utilises a biometric modality with high uniqueness, offers a one-time verification that can scale, and is transparent about its open-source, encrypted-data architecture.
No, when we see the broader implications. Trusting an iris-scan means trusting that the company, its operators and its systems handle your biometric data responsibly, operate across jurisdictions ethically, and don’t create exclusion or surveillance risks. Trust also depends on whether the technology truly solves the problem (humans vs bots) without creating new ones (privacy breaches, tracking, digital divide).
For individuals: you should weigh whether you’re comfortable handing over such biometric data, even if the process is anonymised. For businesses: vet the technology’s compliance, auditability and long-term commitment to data protection. For regulators: ensure safeguards around consent, deletion, non-discrimination and cross-border data flows.