
Young Indians embrace AI innovation but want more safeguards for children

  • Opinion poll ahead of India’s global summit finds concerns about AI-generated abuse images
  • Chair of expert group formed by government says safety and innovation must go hand in hand

Young people in India see artificial intelligence (AI) as powerful and helpful but believe that innovation must be balanced with strong safety rules to protect children.

These are among the findings of a new opinion poll conducted ahead of a major global summit on AI being hosted by the Government of India in February.

The nationwide survey of more than 400 internet users in India aged 18 to 24 found that more see AI as mainly a force for good (38%) than as mainly harmful (2%), while a majority (58%) consider it to be both good and harmful.

About two-thirds of young people polled (68%) say it helps them learn new skills and work more efficiently and 56% say it lets them access useful information and opportunities.

However, the poll, commissioned by the Childlight Global Child Safety Institute, found that almost all young people are concerned about the misuse of AI to generate sexually explicit images and videos of children. A total of 94% described such images, sometimes called “deepfakes”, as harmful.

In addition, most young people (89%) want internet companies and social media organisations to be required to use technology such as AI as a force for good, detecting and removing harmful content before it spreads.

Childlight, which produces an annual study on the scale of sexual abuse and exploitation of children globally, recently highlighted a 1,325% rise in harmful AI-generated online abuse material in the space of one year.

The issue will be a talking point at the India–AI Impact Summit 2026, being held in New Delhi on February 16-20. The summit envisions a future where AI advances humanity, fosters inclusive growth and safeguards our shared planet.

The opinion poll found that young people typically spend a long time every day using social media, messaging or content platforms, with about a quarter (26%) doing so for over four hours. Meanwhile, 39% used them for between two and four hours a day, 29% were online with them for one to two hours and 6% spent less than an hour doing so.

A majority (63%) described being online as usually “enjoyable” and 46% called it “helpful”, while 29% said it was “stressful”, 21% said it was “overwhelming”, 24% called it “safe” and 10% said it was usually “unsafe”.

Young females were most likely to call it unsafe (13%) and were least confident that AI is being developed in ways that protect young people, with 19% saying they were not confident compared with 11% of young males.

The opinion poll was conducted by the international research company Norstat after it was reported that Elon Musk’s AI tool Grok, on the X platform formerly called Twitter, was being used to make sexualised images of children. After facing criticism, the company announced new measures to prevent the practice.

Zoe Lambourne, chief operating officer of Childlight, praised India for its clear commitment to child online safety, and said that balancing AI benefits and innovation with strong safety-by-design would further enhance its regional and global leadership.

She warned that AI-generated child abuse is “real abuse that violates children’s dignity and can cause lasting harm” and supported calls from young people for technology platforms to detect, remove and report harmful AI-generated content.

India-based Space2Grow works to improve digital safety and child protection and has been discussing the issue with Childlight, the Indian Government and other partners in official pre-summit talks.

Chitra Iyer, its CEO, said: “As AI systems scale, India has an opportunity to show that innovation and accountability can progress together by embedding child protection into design, regulation and enforcement, and by listening closely to what young people are telling us about their online realities.”

Gaurav Aggarwal, a volunteer with the software think tank iSPIRIT, is chairman of the Expert Engagement Group on Child Safety and AI, formed by India’s Ministry of Electronics and Information Technology.

He said: “This research validates what we have been hearing in our Expert Engagement Group consultations – that safety and innovation must go hand in hand. Our approach combines technical solutions with legal safeguards, addressing both who can access digital spaces and what they experience once inside. The voices of young people must be central to this effort.”

Young people told the polling company they viewed AI as a force for both good and bad. “AI is great for the future only if it is used with good intentions,” said one 22-year-old female.

One 21-year-old male said: “AI and the internet should be designed with strong safety, privacy protection and clear rules. Users, especially children, need better education about online risks. And platforms should act quickly against harmful or misleading content.”

An 18-year-old male added: “Both AI and the internet are tools. Their impact depends on how we use them. Staying informed, vigilant, and responsible will help ensure a safer and more beneficial digital future for everyone.”

In official pre-summit talks Childlight and partners agreed that child safety cannot sit with technology companies alone but requires a strengthened protection system involving families, schools, platforms, practitioners and policymakers.

They agreed that safety must be built in from the start of AI design, with governance, oversight and careful thinking about misuse and unintended consequences. They added that children and young people must be part of the design process, balancing innovation with safety – including for girls who face particular risks.

Contact Information

Jason Allardyce, Childlight Head of Communications
jason.allardyce@ed.ac.uk

Notes to editors

Childlight Global Child Safety Institute is an international organisation that works with global partners, governments and law enforcement organisations to combat child sexual exploitation and abuse. It was established in 2021 as a partnership between Human Dignity Foundation and the University of Edinburgh.

Norstat conducted a poll of 410 internet users in India, aged 18-24, on January 15-19, 2026. The sample included a wide range of household income levels, a mix of students and workers, and was split equally between males and females.


Photos attached are free to use

Childlight is a global child safety data institute, hosted by the University of Edinburgh and the University of New South Wales and established by Human Dignity Foundation. It uses academic research expertise to better understand the nature and prevalence of child sexual exploitation and abuse (CSEA) and to help inform policy responses to tackling it.

Its purpose is to safeguard children across the world from sexual exploitation and abuse. Its vision is to have CSEA recognised as a global health issue that can be prevented and treated. Its mission is to use the power of data to drive sustainable, co-ordinated action to safeguard children across the world; to improve CSEA data quality, integrity and reproducibility; and to be recognised as the leading independent authority for global CSEA data.

Childlight also draws on decades of senior-level law enforcement experience. Its multi-disciplinary approach not only produces high-quality data insights but enables Childlight to help authorities around the world turn data into action to pinpoint and arrest perpetrators and safeguard children.