NG Solution Team

How are tech companies safeguarding children amidst generative AI advancements?

As AI technology rapidly evolves, tech companies face a pressing responsibility to safeguard children in the digital age. Generative AI in particular poses significant risks to child safety online. Dr. Rebecca Portnoff of Thorn highlights alarming statistics: a dramatic increase in child sexual abuse material over the past two decades, frequent solicitation of minors for inappropriate images, and numerous reports of sexual extortion. Because AI can be misused to create and manipulate harmful content, it is crucial that tech companies implement protective measures.

Thorn leads the charge with its “Safety By Design” framework, adopted by platforms such as Slack, Patreon, and Vimeo, as well as AI developers such as OpenAI. The framework emphasizes three principles: developing AI models with a focus on child safety, deploying them only after rigorous safety evaluations, and maintaining ongoing vigilance against emerging threats. The rapid pace of technological advancement demands that companies build safety measures directly into their AI models to prevent misuse by bad actors.

While Thorn’s guidelines primarily target developers, they also offer resources for parents, including conversation starters and safety tips. As the AI Conference continues, the focus remains on creating a safer digital environment for children.
