Meta's internal AI chatbot policies have come under scrutiny after a document revealed several alarming potential behaviours. The policies seemingly permitted AI creations to engage children in 'romantic or sensual' conversations. Furthermore, the AI was allowed to generate false medical information. The document also showed the AI could be used to support racist arguments.
These revelations raise serious questions about the safety and ethical implications of Meta's AI development. The AI bots are designed to initiate conversations and enhance user engagement. Meta's AI Studio allows users to build customised digital personas. Meta has a three-part approach to content enforcement: remove, reduce, and inform. Meta also has guidelines and policies on the use of automated Bots. It is critical to ensure AI systems are not only advanced but also aligned with societal values and safety standards.