
Age assurance: An imperfect science and solution

Age assurance engages the familiar trade-off in online safety regulation between protecting children and guarding privacy rights. As appetite grows for restrictions on access to digital services, we examine existing methods, regulatory approaches, and wider debate.

  • As regulators around the world look to restrict the content children can access online, the case for improved, accurate age barriers on riskier websites has strengthened. Age assurance represents one solution, though not the only one, for improving online safety for young people while still allowing their participation in appropriate digital spheres.

  • While not a new concept, age assurance is the subject of growing calls for current regulation, which bans underage access to pornographic content and age-restricted products, to be expanded to cover a wider definition of harmful content. In the context of regulation such as the UK’s Online Safety Act and Ireland’s Online Safety Code, content providers face mounting pressure to restrict potentially harmful content more carefully for minors.

  • There is currently a lack of global consensus on how to accurately and consistently establish age assurance barriers for underage users, as various regions and platforms pursue disconnected efforts. All of the most common age assurance methods in use involve notable advantages and disadvantages.

  • Almost all current verification methods raise significant concerns relating to privacy, cybersecurity, and rights to free speech. Any concerted efforts to develop and standardise age assurance requirements will need to navigate a crowded field of existing regulation, including the EU’s GDPR and DSA, and the United Nations’ Convention on the Rights of the Child.

An increasing number of governments and regulators are considering age-based restrictions on digital services

As an increasing number of countries advance online safety legislation, issues relating to children’s online activity continue to occupy governments and regulators. Age assurance, one tool for protecting children online, is broadly used as an umbrella term for mechanisms that seek to determine the age, or age range, of a user attempting to access an online site, service or content. Governments around the world have mandated (or are working towards mandating) age assurance for various purposes, including online safety, thereby increasing the demand on regulators to manage implementation and accuracy challenges. Findings published by Ofcom in 2022 showed that a third of UK children aged eight to 17 with a social media profile used a false date of birth to evade minimum sign-up ages, and questions around age assurance methods and their efficacy stand as hurdles to achieving broader public policy aims on child protection. Recognising this, we examine the various approaches to online age assurance currently being developed around the world, and the associated concerns relating to privacy, user rights and regulatory approach.

Under the umbrella of age assurance, methods can be broken down into three primary categories – self-declaration, age estimation, and age verification. Within these categories, tools can broadly be classed as self-declaration, physical documentation, facial age estimation, data scraping, and the use of digital ID, each of which has varied strengths and weaknesses (see Table 1). The absence of regulatory standardisation at both the national and international levels means a variety of age assurance methods are currently employed by big tech and approved by regulators. Not all online situations require the same level of assurance, and many products and services require a combination of age assurance tools in order to meet standards of proportionality. Consequently, attempts to regulate the field have focused on balancing the development of effective assurance technologies with compliance with data minimisation principles.
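
To make the taxonomy concrete, the sketch below models the three categories and a simple proportionality mapping in Python. The category names come from the discussion above; the risk tiers and the mapping itself are illustrative assumptions rather than any regulator's standard.

```python
from enum import Enum

class AssuranceMethod(Enum):
    """The three primary categories of age assurance described above."""
    SELF_DECLARATION = "self-declaration"  # user states their own age
    AGE_ESTIMATION = "age estimation"      # e.g. facial analysis, data inference
    AGE_VERIFICATION = "age verification"  # e.g. ID documents, digital ID

# Hypothetical proportionality mapping: higher-risk services layer
# stronger methods. The tiers and combinations are assumptions for
# illustration, not drawn from any regulator's guidance.
REQUIRED_METHODS = {
    "low_risk": [AssuranceMethod.SELF_DECLARATION],
    "medium_risk": [AssuranceMethod.SELF_DECLARATION,
                    AssuranceMethod.AGE_ESTIMATION],
    "high_risk": [AssuranceMethod.AGE_ESTIMATION,
                  AssuranceMethod.AGE_VERIFICATION],
}

def methods_for(service_risk: str) -> list[AssuranceMethod]:
    """Return the combination of methods a service in this tier might use."""
    return REQUIRED_METHODS[service_risk]

print(methods_for("high_risk"))
```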

While self-declaration methods have been discredited, there is not yet consensus on the best assurance tools to replace them

The term ‘age assurance’ refers to several existing and developing processes, with varying levels of specificity and accuracy, and varying privacy implications. ‘Age verification’ is used to refer to methods designed to verify the exact age of a user, including through references to existing government documents, while ‘age estimation’ is any method designed to estimate age, often by algorithmic or other AI-powered means such as facial age estimation or data scraping. Often viewed as less effective than both of these sub-categories, ‘self-declaration’ refers to methods through which a user is asked to provide their age or date of birth, without the need for further evidence or substantiation.

As a result of the ease with which self-declaration methods can be evaded by underage users, they are broadly being phased out and replaced with harder identifiers, in line with strengthening regulation on child online safety. The employment of digital ID as one potential solution recently received a boost through the EC’s October 2024 announcement of plans to integrate an age assurance app into its new European Digital Identity Wallet (EUDI), which is currently being trialled. Digital IDs have previously been eschewed for age barrier purposes due to difficulties in proving that they minimise data sharing, but the EC’s privacy-preserving approach could usher in a new era of online age assurance standardisation. Large platforms generally deploy a variety of age assurance methods, often in conjunction, to mitigate the weaknesses of any method used in isolation (see Figure 1). Recent efforts to align on a common international approach have been spearheaded by the Office of the Privacy Commissioner of Canada (OPC) and the UK’s Information Commissioner’s Office (ICO), but regulation and adoption remain disjointed.

Challenges have arisen over the accuracy of age assurance methods and their inclusivity of users

The use of government documentation, in which a user is required to provide a verified scan of a photo ID (e.g. a passport or driving licence), is a well-tested mechanism in offline life, and offers a high level of age assurance because the documents have themselves been through a verification process. Consequently, physical ID scans are used widely by large platforms (including Meta, TikTok, Google, Roblox and OnlyFans), often as a first barrier to age-restricted content. Their increasing unpopularity, however, stems from the more personal (and commercially valuable) data captured in the process, the risk of a child committing a crime by fraudulently using another person’s ID or a fake document – and, perhaps most importantly, a significant potential to disproportionately disadvantage users without access to official documentation. Critics of physical ID scans have repeatedly pointed out that mandating these methods could make the problem worse by putting some people off using online tools entirely, consequently harming digital literacy efforts.

Biometric approaches to age assurance, which use facial scanning technologies to estimate the age of a face without recognising or identifying the individual, rely on huge datasets and machine learning models. This method is widely used by major photo- and video-sharing platforms, including Instagram (which requires some users to submit a video selfie), Google and TikTok - all of which partner with third-party providers to outsource biometric scanning processes. Developers of biometric estimation underline that the methods are inclusive of those who may not be able to present a valid ID document, and can make age assurance easier and more user-friendly by requiring only a real-time selfie rather than any additional documentation. GoBubble, a social networking site for children, employs facial analysis technology to conduct age assurance checks, and instantly deletes the provided selfie. Significant concerns remain about the accuracy of these technologies, however, including their notable margin of error when processing users who are close to an age boundary, have particularly light or dark skin, or have a disability that affects their appearance.
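
A minimal sketch of how an estimation step might slot into an access flow is below, assuming a hypothetical third-party `estimate_age` service (the stub, threshold and error-band handling are all illustrative, not any provider's actual API). It mirrors two practices noted above: discarding the selfie immediately, and escalating users near an age boundary to a stronger check.

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    years: float            # model's point estimate of the user's age
    margin_of_error: float  # provider-reported error band, in years

def estimate_age(selfie: bytes) -> AgeEstimate:
    """Stand-in for a third-party facial age estimation service.
    Real providers run machine learning models trained on large
    labelled datasets; this stub returns a fixed value."""
    return AgeEstimate(years=19.2, margin_of_error=2.5)

def gate_access(selfie: bytes, minimum_age: int = 18) -> str:
    estimate = estimate_age(selfie)
    del selfie  # discard the image once the estimate exists, as GoBubble does
    # Users within the error band of the boundary cannot be cleared by
    # estimation alone, so they are escalated to a stronger check.
    if estimate.years - estimate.margin_of_error >= minimum_age:
        return "allow"
    if estimate.years + estimate.margin_of_error < minimum_age:
        return "deny"
    return "escalate_to_verification"

print(gate_access(b"<selfie image bytes>"))  # "escalate_to_verification"
```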

One of the most controversial approaches to age assurance is data scraping, which uses powerful AI models to analyse users’ online behaviour and activity to match them with a specific age or age range. This can include scanning information that users choose to share about themselves, such as usernames, hobbies and in-site preferences, as well as making inferences based on online friends, the language used in interactions, and even what time of day they access a service. While the inferred, AI-generated age category is rarely taken as factual by a site – rather, these approaches are usually used in tandem with others, flagging users who appear likely to have lied about their age – the accuracy of AI age estimation through what is essentially enhanced data surveillance has drawn substantial criticism. Such inference models are far from the fringe of age assurance approaches – for instance, Meta announced in 2021 that, because it viewed physical ID scanning as unfair and easily evaded, it had been using trained AI to scan users’ messages on its site (and on apps where users had linked their Meta account) in order to ascertain their age.
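
In code, the flagging pattern described here, where an inferred age is used only as a trigger for further checks rather than as fact, might look like the following sketch. The signal names, band format and confidence threshold are hypothetical; platforms such as Meta have not published their models.

```python
def infer_age_band(signals: dict) -> tuple[str, float]:
    """Stand-in for a trained model that maps behavioural signals
    (stated interests, friend graph, language, activity times) to an
    age band plus a confidence score; the stub returns a fixed guess."""
    return "13-15", 0.82

def review_account(declared_age: int, signals: dict) -> str:
    band, confidence = infer_age_band(signals)
    low, high = (int(bound) for bound in band.split("-"))
    # The inference is never treated as factual: a confident mismatch
    # with the declared age only flags the account for a harder check.
    if confidence >= 0.8 and not (low <= declared_age <= high):
        return "flag_for_age_verification"
    return "no_action"

print(review_account(declared_age=21, signals={"interests": ["gaming"]}))
```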

Methods which seek to integrate existing digital identity schemes into age assurance largely overcome problems of accuracy, and their mounting adoption across European countries suggests digital ID could become policymakers’ solution of choice. By placing the onus on tightly regulated digital ID providers (often private companies like Yoti, but increasingly official bodies like the EU) to certify the accuracy of age-proving documents, these methods take the pressure off each individual site to handle and verify data. Digital ID wallets can be developed to provide age information to sites requiring assurance, automatically permitting or denying access without the opportunity for false declarations or other forms of evasion. Digital ID methods, however, often rely on users providing scans of physical identification documents at the point of creating a digital ID wallet, and therefore do not escape the access challenges presented by more traditional document scan approaches – even if their re-usability and interoperability offer potential for smoothing the friction of repeated age barriers.
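
In outline, the flow is that the wallet presents a provider-signed attestation of a single fact, such as ‘over 18’, and the site verifies the signature without ever seeing a date of birth or identity. The sketch below uses a shared-secret HMAC purely for brevity; real schemes like the EUDI wallet rely on public-key credentials, so treat the names and mechanics here as assumptions.

```python
import hashlib
import hmac
import json

# Stand-in for the digital ID provider's signing key. Real schemes use
# asymmetric signatures, so sites never hold the signing secret.
PROVIDER_KEY = b"demo-provider-key"

def issue_attestation(over_18: bool) -> dict:
    """Issued by the ID provider after verifying the holder's documents
    once; the attestation carries only the single age claim."""
    claim = json.dumps({"over_18": over_18})
    tag = hmac.new(PROVIDER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": tag}

def site_accepts(attestation: dict) -> bool:
    """The site checks the provider's signature and reads only the
    boolean claim: no name, address or date of birth is shared."""
    expected = hmac.new(PROVIDER_KEY, attestation["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["signature"]):
        return False
    return json.loads(attestation["claim"])["over_18"]

print(site_accepts(issue_attestation(over_18=True)))  # True
```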

Beyond accuracy, privacy concerns are significant for all methods

Critics of age verification have argued that all of these solutions endanger the personal data of users. Given several high-profile investigations into major tech firms that found frequent misuse of user data for targeted advertising and AI training, it is understandable that age assurance methods incorporating facial recognition, biometric data and the scanning of sensitive personal documents are viewed with some suspicion, particularly when the data of underage users is at stake. The EC’s February 2024 research on age assurance explicitly states that age assurance requirements ‘should not be used by companies as a cover for their aggressive data collection practices’, reflecting concerns that scaled-up frameworks could simply provide tech companies a means to gather further information about their users. Critics also point to the dangers of over-emphasising the ‘accuracy’ of age assurance methods: frequently, the more accurate and robust a method is, the more intrusive it is likely to be, risking infringement of users’ privacy and data protection rights.

A major disadvantage of scanning hard identifiers like a user’s passport or credit card is the required sharing of other sensitive personal data, often with riskier online sites. Since these documents always contain attributes and data points going far beyond a user’s age, such as their full name, address and nationality, and sometimes even their race and gender, extremely high levels of secure data storage and transparency are required to ensure additional information on a user is not retained for commercial profile-building purposes. Without well-enforced regulation covering all services that require documentation scans as a form of age assurance, these methods can disproportionately risk a user’s right to privacy.
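
One mitigation is to derive the single fact a service needs from the scan and discard everything else. The sketch below assumes a hypothetical `parsed_id` record standing in for OCR output from a document scan; the point is that only a yes/no answer leaves the function.

```python
from datetime import date

def old_enough(parsed_id: dict, minimum_age: int = 18) -> bool:
    """Derive only the yes/no answer from a parsed document scan.
    `parsed_id` stands in for OCR output containing name, address,
    nationality and date of birth, none of which is retained here."""
    dob = date.fromisoformat(parsed_id["date_of_birth"])
    today = date.today()
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    # Only the derived boolean is returned; the caller should delete the
    # full record, with its surplus attributes, immediately afterwards.
    return age >= minimum_age

scan = {"name": "A. Example", "address": "1 Example Street",
        "nationality": "IE", "date_of_birth": "2012-05-01"}
print(old_enough(scan))  # False: the holder is under the minimum age
```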

Biometric methods, which do not require a user to submit any documentation, can preserve privacy as long as services discard the facial image upon completing the desired process. The ICO has issued a set of principles to internet service providers that emphasises the requirement for any excess personal information gathered through age assurance methods to be erased, and stresses that age assurance information should not be held longer than necessary. This represents a significant level of trust afforded to technology companies, many of whose historic actions suggest that they perhaps do not deserve it. Some of the biggest players in commercial age assurance technologies, like Yoti and SuperAwesome, have applied to the US Federal Trade Commission (FTC) to have their ‘Privacy-Protective Facial Age Estimation’ technology approved as an acceptable assurance mechanism under the Children’s Online Privacy Protection Act (COPPA), as a potential solution to privacy challenges, but apprehensions remain given the sensitivity of biometric data. Regulators have repeatedly emphasised the need for these methods to pay due consideration to the GDPR, with the ICO and Ireland’s DPC both asserting that any biometric data processing must meet the special conditions of Article 9.

Privacy concerns are also significant for approaches based on data scraping, particularly in the context of ongoing, high-profile training of AI large language models (LLMs). There remains little clarity on how the AI used to build age profiles gathers and stores its data – or whether user consent is taken into account. While data scraping could produce accurate and useful estimations of age, especially when used to flag suspected underage users, the associated privacy risks and potential for unnecessary data processing beyond establishing a user’s age are extremely high. These methods ‘violate the privacy of all users’, in the words of Spain’s Data Protection Agency (AEPD), and go against the principles of data minimisation underpinning EU privacy regulation.

Digital ID solutions, too, have faced criticism for their associated privacy risks, with the Age Verification Providers Association (AVPA) pushing back against suggestions that the EU could use its EUDI wallet as an age assurance solution. The body contends that allowing potentially harmful sites even limited access to a user’s EUDI wallet could facilitate unnecessary identification of the individual, increasing the risk of major privacy breaches. In the UK, the Department for Science, Innovation and Technology (DSIT) outlined the benefits of digital identity in an October 2024 blog post, particularly in how it can allow users to ‘share only what [personal information] is needed, when it is needed’, but made no direct reference to its potential in age assurance. Perhaps, therefore, digital ID does not yet represent the catch-all solution to online safety that some within the EU have suggested - with many wary of allowing riskier websites access to such a sensitive, personal e-document.

Age assurance methods have been criticised as restricting fundamental rights

Even if accuracy challenges and privacy concerns are overcome, age-based restrictions on digital services are further complicated by an array of debates relating to fundamental rights. Many critics of age assurance have questioned whether, even if proven to be 100% accurate, these technologies should be viewed as a viable solution to online child safety at all. The ICO’s statutory code of practice on age assurance explicitly stresses the requirement for online services to take into account the rights protected by the United Nations’ CRC when establishing age barriers. In particular, children’s right to non-discrimination can be affected by flawed application of age assurance tools, which may unfairly restrict certain groups of child users. The regulator has underlined the need for providers to take legislation like the UK’s Equality Act 2010 into account in order to reduce bias, and to ensure inaccurate age assessments are rectified quickly, limiting the chances of children’s speech rights being infringed.

In addition to the rights of child users, critics have opposed age assurance given the risk of inaccurate technology blocking adult users from accessing open websites and limiting their ability to remain anonymous online - with free speech advocates in the US in particular suggesting this impinges on citizens’ First Amendment rights. In March 2024, the US-based NGO Free Speech Coalition, which represents the adult industry, submitted detailed feedback on Ofcom’s age assurance guidelines, strongly criticising the risk of regulation threatening the rights of adults to access online content without providing personal information.

Outside of COPPA requirements, little attention had been paid to age verification in the US until Louisiana’s legislature passed a law in 2022 requiring the use of age verification on websites that contain a ‘substantial portion of adult content’. This led to a flurry of copycat legislation around the US, with eight similar bills passing in 2023 and generating significant opposition; restrictions on access to social media and discussion platforms have faced legal challenges as unconstitutional restrictions on users’ free speech. These concerns were underlined in September 2024, when a judge blocked the Utah Minor Protection in Social Media Act as an infringement on the speech rights of both the children who use these platforms and the platforms themselves. The issue of free speech is acknowledged in EU approaches as well, with the EC maintaining that anonymity online can bolster freedom of expression and intellectual privacy. EC guidance has broadly noted that while some erosion of anonymity will be necessary to regulate harmful or illegal content, this should be carried out with respect for data minimisation principles and an emphasis on proportionality.

Despite increased attention, age assurance is only one part of efforts to make the online world safer for children

Given the significant challenges posed by each age assurance method, the EC has stressed that it should not be treated as a ‘silver bullet’ for online child protection. According to the EC, the commercial investment, time and regulatory attention spent on improving online age assurance methods should be balanced with concurrent efforts on child-friendly design and substantial digital education. Regulators should anticipate that loopholes and circumventions of even the most effective age-checks will be found. More importantly, stringent, even perfect, age assurance does not solve the problem of online harms simply by indiscriminately restricting children’s access to online services. Harms which lie beyond age barriers require continued, careful and coordinated regulation.

Many ongoing legislative efforts to protect children online, like the UK’s Online Safety Act and the EU’s Digital Services Act, devote significant attention to children’s digital education and adult media literacy as important complements to age assurance in making the online world safer. Ireland’s Coimisiún na Meán revealed its finalised Online Safety Code in October 2024, notably including obligations for video-sharing platforms to implement ‘effective’ age assurance methods alongside other measures like parental controls, easy-to-use reporting and flagging tools, and tighter bans on the sharing of potentially harmful content. Ofcom’s three-year media literacy strategy, announced in April 2024, and the EC’s online literacy guidelines represent longer-term approaches to online safety that look beyond fallible age assurance blocks. When combined with age-appropriate design codes like the ICO’s, age assurance technologies signify one of several important tools in the arsenal of online child protection frameworks.