Age verification is being built before it is solved | FOMO Daily
Age verification is spreading quickly across the internet, but the current tools still carry serious tradeoffs for privacy, security, accuracy, and free speech. The real challenge now is not whether governments want age checks, but whether they can impose them without building a more restrictive and data-hungry internet in the process.
Online age checks used to feel like a niche argument tied mostly to adult sites and a few policy circles. That is no longer the case. Over the last year or two, age verification has moved into the mainstream of internet governance. In the UK, Ofcom says that from 25 July 2025, sites and apps that allow pornography must have strong age checks in place. In Australia, age-restricted social media rules took effect on 10 December 2025, requiring platforms to take reasonable steps to stop under-16s from creating or keeping accounts. In California, a new law called the Digital Age Assurance Act is set to become operative on 1 January 2027 and requires operating system providers to collect age information at setup and pass age-bracket signals to apps. That is a major shift in how the internet works. The old web assumed access first and intervention later. The new model assumes verification first and access second. What this really means is that age checks are no longer a side issue. They are becoming part of the architecture of the web itself.
The politics are moving faster than the tools
Recent reporting from The Verge gets to the heart of the problem. Governments want stronger child protection now, but the technical options still force ugly tradeoffs. That mismatch is driving a lot of bad policy. Politicians are under pressure to act because child safety is one of the few issues that can quickly command public support across party lines. Parents are worried, schools are worried, and regulators have spent years hearing that platforms cannot simply mark their own homework. So the demand for age assurance keeps rising. The problem is that lawmakers are often writing mandates before the privacy-preserving infrastructure really exists at scale. The result is a scramble to force companies into a small set of imperfect choices. Platforms can guess a user’s age based on behaviour. They can ask for ID. They can ask for a face scan. They can push the job onto app stores or operating systems. None of those options is clean, and none of them fully resolves the deeper tension between child safety, privacy, free expression, accuracy, and technical feasibility. Yet the laws are arriving anyway.
Guessing age is not the same as knowing it
One of the softer approaches is age inference. Instead of forcing everyone to upload documents or scan their face, platforms try to infer whether someone is likely to be a minor from signals they already hold. The Verge reports that Meta uses AI to place suspected teens into more restrictive Instagram accounts, while Google and YouTube have also used account signals to identify users who may be under 18. On paper, this feels less invasive because the system is not demanding a passport or driver’s licence up front. But inference is still a form of surveillance, and it is also error-prone. It depends on behaviour, context, device history, account age, and patterns that can be fuzzy or misleading. A person can be incorrectly flagged as underage, while an actual teenager can still slip through. That is why age inference often turns into a gateway rather than a solution. If the model is unsure, the user is then pushed into a more invasive check anyway. So the supposedly light-touch option often just delays the harder question instead of solving it. The internet ends up watching more people while still asking some of them for even more proof.
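The inference-as-gateway dynamic can be sketched in a few lines. This is an illustrative toy, not any platform's real model: the signals, weights, and thresholds are invented for the example. The point it demonstrates is structural, that an uncertain score does not resolve the question but escalates the user to a more invasive check.

```python
# Toy sketch of the "inference as gateway" pattern. All signals, weights,
# and thresholds are assumptions for illustration only.

def infer_minor_score(signals: dict) -> float:
    """Combine weak behavioural signals into a rough 0..1 'likely minor' score."""
    score = 0.0
    if signals.get("account_age_days", 9999) < 90:
        score += 0.3
    if signals.get("follows_school_accounts"):
        score += 0.4
    if signals.get("stated_age", 99) < 18:
        score += 0.5
    return min(score, 1.0)

def age_gate_decision(signals: dict, minor_cutoff: float = 0.7,
                      unsure_band: float = 0.3) -> str:
    """Return 'restrict', 'escalate_to_id_check', or 'allow'.

    When the model is unsure, the user is pushed into a more invasive
    verification step, so inference delays the hard question rather
    than answering it.
    """
    score = infer_minor_score(signals)
    if score >= minor_cutoff:
        return "restrict"              # confidently treated as a minor
    if score >= unsure_band:
        return "escalate_to_id_check"  # fuzzy signal, invasive fallback
    return "allow"                     # confidently treated as an adult
```

Note how the middle band is the interesting one: every account that lands there is surveilled first and then asked for documents anyway, which is exactly the double cost described above.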
Once governments or platforms want a firmer answer, personal data starts to pile up. A government ID can be accurate, but it also creates obvious breach risks. A face scan looks more modern, but the privacy problems do not disappear just because the process feels smoother. The Electronic Frontier Foundation has warned that facial age estimation can be discriminatory, with studies showing higher error rates for people of colour and women. The Verge also notes that face-based estimation has been tricked and that even on-device systems, often presented as a privacy win, are not yet easy to secure or deploy well across older hardware. This is where things change from a child safety discussion into a systems trust discussion. Who gets the data? How long is it kept? Can it be linked across services? What happens after a breach? Can a face scan be repurposed by another system later? Can a teenager or an adult challenge a wrong result? These are not side questions. They are the entire issue. A tool that blocks a child from a harmful service but quietly builds a new layer of biometric dependency is not a clean success story.
The device level fix is not really a fix
Because platform-by-platform checks are messy, some lawmakers have decided the app store or operating system should handle age verification once and then pass a signal on to everyone else. Supporters say that is more efficient and less repetitive. The California law is a strong example of this model. It requires an accessible interface at account setup for age information, then a real-time signal that places users into age brackets for developers that request it. On the surface, that sounds cleaner than every app asking for an ID separately. But moving the checkpoint down into the device layer does not make the underlying problem disappear. It centralises it. Suddenly the operating system becomes part of the identity pipeline, and the age signal becomes a standard input for software access. That has huge consequences for open systems, small developers, and privacy-focused alternatives that were never built to become age classification infrastructure. It also changes the relationship between the user and the device. A phone or computer stops being just a tool and starts acting more like a gatekeeper. Once that norm is established, it is hard to believe age will be the last attribute governments or platforms want signalled this way.
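The device-level model the California law describes can be pictured as a small piece of OS plumbing: age collected once at setup, then a bracket label, not the raw age, handed to any app that asks. The bracket boundaries and API shape below are assumptions for illustration, not the statute's actual definitions.

```python
# Minimal sketch of a device-level age-bracket signal of the kind the
# California law describes. Bracket names, boundaries, and the API shape
# are assumptions for illustration.

from dataclasses import dataclass

# (low, high, label) bracket table, inclusive on both ends
BRACKETS = [
    (0, 12, "under_13"),
    (13, 15, "13_15"),
    (16, 17, "16_17"),
    (18, 200, "18_plus"),
]

@dataclass
class DeviceAgeStore:
    """Age information collected once, at account setup, by the OS."""
    age_years: int

    def bracket_signal(self) -> str:
        """What a requesting app receives: a bracket label, never the raw age."""
        for low, high, label in BRACKETS:
            if low <= self.age_years <= high:
                return label
        raise ValueError("age out of range")
```

The design does limit what each app learns, but it also illustrates the centralisation problem: every app's access decision now routes through one store on the device, which is exactly the gatekeeper role described above.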
Open systems and smaller players get squeezed first
Big platforms can absorb compliance work more easily than the open web can. That is one of the quieter truths in this debate. The Verge notes that open source operating systems and Linux distributions are already grappling with how app-store-style age laws might apply to them. That matters because policy often gets written with Apple and Google in mind, then spills outward into a much wider technical ecosystem. Rules designed around a few giant companies can end up punishing smaller, decentralised, privacy-oriented, or volunteer-run projects that do not have the money or structure to implement age assurance pipelines the same way. The same pressure can fall on independent websites, forums, niche services, or new entrants who now face extra compliance and legal uncertainty just to let people read, talk, or download software. What this really means is that age verification can become a market filter as much as a safety measure. The companies with the biggest identity, moderation, and legal teams get stronger. The edges of the internet get weaker. That is a serious cost, even if it rarely appears in the public sales pitch.
Courts are warning that this is still about speech
Another reason this issue is not settled is that age verification is not just a technical compliance question. It is also a constitutional and civil liberties question. The Verge reports that laws aimed at both app stores and platforms have already run into federal court trouble. Reuters separately reported that a federal judge blocked enforcement of a Texas law requiring app stores and developers to verify users’ ages, and later reported that a federal judge blocked Virginia’s law restricting social media use for children, finding likely First Amendment problems. That matters because age gates do not just filter commerce. They can limit access to speech, communities, information, and lawful expression. When the government requires broad age checking before someone can access large parts of the internet, the legal system starts asking hard questions about overbreadth, burden, and whether the state has chosen a narrowly tailored path. The courts are not saying child safety is unimportant. They are saying the method still matters. That warning should be taken seriously, because badly designed online safety laws can easily create new rights problems while failing to solve the old safety ones.
Better privacy ideas exist but they are not yet the norm
There is a more hopeful part of the story, and it sits in the world of privacy-preserving credentials. France’s CNIL demonstrated a privacy-preserving age verification model that lets someone prove they are old enough without sharing other personally identifiable data. Google has also open sourced zero-knowledge proof libraries for age assurance, arguing that this can help developers build privacy-enhancing age checks. The Future of Privacy Forum says reusable credentials can store a single verification as a secure token so people do not have to keep resubmitting sensitive personal data over and over. That is the direction the industry should want. A person should be able to prove a narrow fact, like being over 18, without exposing their full birth date, their full identity, or a reusable pile of biometric information. But the problem is that these systems are still not the universal standard in everyday use. They remain early, uneven, or limited by interoperability and implementation challenges. So while the better future is visible, most of the internet is still being pushed toward cruder methods in the meantime.
The deeper risk is that we normalise permanent ID checks
The child safety argument is powerful because it starts from a real concern. There is no point pretending otherwise. Harmful content exists. Predatory behaviour exists. Recommendation systems can amplify risk. Young users often do need stronger protection than the internet currently gives them. But the deeper risk is that societies get used to proving identity traits every time they want to access lawful digital spaces. Once that habit becomes normal, the infrastructure tends to expand. A rule built for pornography may later reach social media. A rule built for social media may later drift toward games, app downloads, forums, video sites, or general online services. A signal first framed as age only may become linked to parental status, jurisdiction, device ownership, or other categories. That is why privacy advocates worry even when the policy goal sounds reasonable. The danger is not only one bad system. It is the gradual building of a web where access always begins with a personal checkpoint. That changes the culture of the internet itself. It makes anonymity weaker, casual access harder, and participation more conditional. That may be a far larger transformation than many lawmakers currently admit.
What changes next
The next phase will be defined by whether policymakers accept that age assurance is not a simple box to tick. Better systems will need to minimise data collection, reduce repeated checks, avoid discrimination, work across devices, survive legal scrutiny, and still be understandable to ordinary users. That is a high bar, but it is the right one. The wrong path is to keep locking in rushed mandates and then hope the technology improves later. The better path is to treat privacy, security, usability, and speech as core design requirements from the start. In practical terms, that means more pressure for narrow proof systems, more legal fights over broad mandates, more tension between national rules and global platforms, and more debate over whether app stores, operating systems, or websites should bear the burden. Age verification is clearly not going away. The question now is whether it becomes a carefully limited safeguard or the foundation of a much more permission-based internet. Right now, the answer is still unsettled, and that is exactly why this debate matters so much in 2026.