NG Solution Team

How are tech companies safeguarding children amidst generative AI advancements?

As AI technology rapidly evolves, tech companies face a pressing responsibility to safeguard children in the digital age. The rise of generative AI poses significant risks to child safety online. Dr. Rebecca Portnoff of Thorn cites alarming statistics: a dramatic increase in child sexual abuse material over the past two decades, frequent solicitation of minors for inappropriate images, and numerous reports of sexual extortion. Because AI can be misused to create and manipulate harmful content, it is crucial that tech companies implement protective measures.

Thorn leads the charge with its “Safety by Design” framework, adopted by platforms such as Slack, Patreon, and Vimeo, as well as tech giants like OpenAI. The framework emphasizes three principles: developing AI models with child safety in mind, deploying them only after rigorous safety evaluations, and maintaining ongoing vigilance against emerging threats. The rapid pace of technological advancement demands that companies build safety measures directly into their AI models to prevent misuse by bad actors.

While Thorn’s guidelines primarily target developers, they also offer resources for parents, including conversation starters and safety tips. As the AI Conference continues, the focus remains on creating a safer digital environment for children.
