Tech companies disagree over who should be responsible for ensuring children's online safety. Conflicting state laws are emerging that require age verification and could expose companies to billions in fines, forcing costly changes to their platforms. Some argue the bills' age verification requirements would force websites to collect more personal data, undermining user privacy.
The legislation requires companies to mitigate harms to children, including bullying, violence, the promotion of suicide and eating disorders, sexual exploitation, and advertisements for illegal products. Social media platforms may need to give minors options to protect their information, disable addictive product features, and opt out of personalised algorithmic feeds. Critics counter that some of these measures could themselves threaten users' privacy.
Regulators are also increasing scrutiny, demanding that algorithms be configured to block harmful material from reaching children. Some platforms have been asked to explain how they assess risks to children and what practical steps they are taking to keep them safe.