Anthropic will now train its AI models on user data, including chat transcripts and coding sessions. This marks a shift in the company's data practices and extends data retention for active users to five years. Users who prefer not to have their data used for training can opt out.
Under the new policy, user interactions will directly contribute to refining Anthropic's AI. The longer retention period allows for a more comprehensive training dataset, potentially improving model accuracy and contextual understanding, while the opt-out gives privacy-conscious users a way to keep their data out of training entirely.
Anthropic has stated that it does not actively collect personal data to train its models, and that it employs techniques such as 'Constitutional AI' to minimise the need for user data collection. The company also says that user data will not be used to contact people, build profiles, or sell information to third parties.