Anthropic Trains on User Data

28 August 2025

Anthropic will now train its AI models on user data, including chat transcripts and coding sessions. This marks a shift in its data practices and extends data retention for active users to five years. Users who do not want their data used for training can opt out.

Under this change, user interactions will directly contribute to refining Anthropic's models. The longer retention period allows a more comprehensive dataset, which could improve model accuracy and contextual understanding. Users concerned about privacy, however, can still prevent their data from being used.

Anthropic has stated that it does not actively collect personal data to train its models, and that it employs techniques such as 'Constitutional AI' to minimise the need for user data collection. The company also says that user data will not be used to contact people, build profiles, or sell information to third parties.


Tags: AI, Anthropic, privacy, machine learning, data
  • AI firms share safety tests
  • DeepSeek V3.1 Model Unveiled
  • AI Personality Shifts Explained
  • Claude's Context Window Expands