YouTube has started testing AI-based age verification in the United States, part of a global move toward stricter rules on children's online access. The approach, however, could face distinct challenges if deployed in India.
Starting August 13, the platform will use a machine learning tool for a small group of US users. It’s meant to make sure ‘teens are treated as teens and adults as adults’.
The system checks signals such as search history, the types of videos watched, and account activity to infer a user's age, and can override a self-reported birth date if the signals disagree.
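YouTube has not published how its model works, so the following is only a toy illustration of the idea described above: scoring behavioural signals against a threshold, with the inferred result able to override a self-reported age. Every feature name and weight here is invented for the sketch.

```python
# Hypothetical signals that might suggest a minor; the names and
# weights are illustrative only, not YouTube's actual features.
FEATURE_WEIGHTS = {
    "watches_kids_content": 0.5,
    "school_hours_activity": 0.3,
    "account_age_under_1yr": 0.2,
}

def infer_is_minor(signals: dict, threshold: float = 0.6) -> bool:
    """Sum the weights of the active signals; at or above the
    threshold, the account is treated as belonging to a minor."""
    score = sum(w for name, w in FEATURE_WEIGHTS.items() if signals.get(name))
    return score >= threshold

def effective_is_minor(stated_is_minor: bool, signals: dict) -> bool:
    """The behavioural inference can override the self-reported
    birth date, as the article describes."""
    return infer_is_minor(signals) or stated_is_minor

# An account that claims to be an adult but shows strong minor signals
profile = {"watches_kids_content": True, "school_hours_activity": True}
print(effective_is_minor(stated_is_minor=False, signals=profile))  # True
```

A real system would use a trained classifier over far richer features, but the override logic (inference beats self-report) is the part this sketch is meant to show.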
Under YouTube's AI age check, users flagged as minors receive extra protections, such as restricted video recommendations and digital wellbeing tools. Viewers who are not signed in get only the standard content restrictions, without personalised age checks.
Users who are wrongly marked as minors can appeal by sending a government ID, a credit card, or a selfie. James Beser, YouTube’s product director, said the goal is to keep teens safe while protecting their privacy.
Global shift toward mandatory age checks
YouTube’s test comes as more countries push for stronger age checks online. The UK’s Online Safety Act requires sites to use tools like face scans, ID checks, or credit card checks.
Platforms that don’t follow the rules can face fines of up to £18 million or 10% of their global income. Sites like Reddit, X, and Discord have already started making changes to follow the law.
In Australia, social media will be banned for people under 16 starting December 2025. Platforms like Facebook, Instagram, Snapchat, and YouTube will have to check ages using ID, face scans, or AI. In the US, states like Tennessee and Nebraska have passed or suggested laws that require parents’ permission and age checks for minors on social media.
India’s stricter legal framework
India's Digital Personal Data Protection (DPDP) Act, 2023, requires verifiable parental consent before collecting or processing data from users under 18. It also bans targeted advertising aimed at minors and behavioural tracking of their online activity.
The law allows exemptions for platforms that can prove they are 'verifiably safe', but experts warn this 'whitelisting' approach could slow innovation.
Verification is also costly. According to Consumer Unity and Trust Society (CUTS) International, ID-based checks could cost about $176,471 per million users per year, while DigiLocker tokens may cost around $35,176. In rural areas, where many people share devices, behavioural signals become harder to read, raising the risk of misclassification.
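The CUTS figures are quoted per million users, so it helps to put them on a per-user basis. A quick calculation, using only the numbers cited above:

```python
# Per-user cost comparison based on the CUTS International figures
# quoted in the article (annual cost per one million users, in USD).
ID_CHECK_COST_PER_MILLION = 176_471   # document-based ID verification
DIGILOCKER_COST_PER_MILLION = 35_176  # DigiLocker token verification

def per_user_cost(cost_per_million: float) -> float:
    """Annual cost per individual user in USD."""
    return cost_per_million / 1_000_000

id_cost = per_user_cost(ID_CHECK_COST_PER_MILLION)
token_cost = per_user_cost(DIGILOCKER_COST_PER_MILLION)

print(f"ID checks:        ${id_cost:.3f} per user/year")
print(f"DigiLocker token: ${token_cost:.3f} per user/year")
print(f"Token checks are roughly {id_cost / token_cost:.1f}x cheaper")
```

Per user, that works out to roughly $0.18 a year for ID checks versus about $0.035 for DigiLocker tokens, a difference of about five times.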
Discord’s parallel experiments
Other platforms are experimenting with stricter checks too. In April, Discord began testing face scans and photo-ID checks in the UK and Australia, triggered when a user encounters flagged content or changes sensitive-content settings.
In 2024, Ofcom said 12% of UK kids aged 8–17 used Discord. Of those, 73% said the app never asked them to prove their age. In Australia, the eSafety Commissioner found 8% of kids aged 8–12 had accounts, even though Discord’s rule is for ages 13 and up.
The road ahead for India
Experts say India could build on its digital public infrastructure to create privacy-preserving, token-based age checks linked to Aadhaar. Such a system would return only a yes-or-no answer and store no personal details. Alternatives include facial age-estimation tools such as Yoti, or AI-based inference.
However, policy experts say that checking ages alone won’t fix the broader range of online dangers. In 2022, India reported 1,823 cybercrimes against children, 32% more than in 2021.
Since India's laws are already among the strictest in the world, YouTube's AI model would likely need significant changes and additional safeguards before it could work well in the country.