“Eugh,” we said, back in 2025 when Discord started “an experiment” in which some users were asked to scan their faces or IDs to verify that they were old enough to access sensitive content. But experiments, generally speaking, are tests, and tests, generally speaking, are something you do in pursuit of a particular goal, and now here we are not quite a year later and Discord is preparing to roll out face scanning and ID checks for everyone, everywhere, all the time.
You'll still be able to use Discord if you opt not to take part in this particular surveillance nightmare, but beginning in early March you'll be locked into a “teen-appropriate experience” if you don't. That means content filters, age gates, the inability to speak in “stage channels” (essentially channels that let a group of people converse while an audience listens), and restrictions on direct messages and friend requests.
“Video selfies for facial age estimation never leave a user's device,” today's announcement says, adding, “Identity documents submitted to our vendor partners are deleted shortly—in most cases, immediately after age confirmation.”
You may recall that it was one of those vendor partners that leaked an estimated 70,000 age verification images in October 2025, just four months ago, along with names, usernames, emails, the last four digits of credit cards, and IP addresses: “A pretty comprehensive mess,” as PC Gamer's Jeremy Laird put it at the time.
Despite significant public pushback, other countries and regions are following suit with their own regimes of age checks and definitions of “sensitive” content, which in some cases has led to bans and blocks: Australians under 16 are banned from social media, for instance, though not from Steam, illustrating the fundamentally arbitrary nature of the whole thing. In the US, the Kids Online Safety Act will lead platforms to over-censor, the EFF warned in 2025, because “the list of harms in KOSA's ‘duty of care’ provision is so broad and vague that no platform will know what to do regarding any given piece of content.”