OpenAI is limiting employee access to its most advanced AI algorithms amid rising concerns about corporate espionage and intellectual property theft. The company has implemented stricter data access protocols, enhanced staff vetting, and increased physical security at its data centres.
Known internally as "information tenting", the new policies drastically reduce the number of personnel who can access or discuss sensitive algorithms. During the development of the o1 model, for example, discussions were limited to a small group of vetted team members. OpenAI now uses fingerprint scanners for room access and keeps proprietary technology on isolated, offline computer systems, while a "deny-by-default" egress policy blocks any outbound internet connection that has not been explicitly approved. These changes follow claims that Chinese AI firm DeepSeek copied OpenAI's models using "distillation" techniques.
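A deny-by-default egress policy inverts the usual firewall posture: instead of blocking known-bad destinations, it refuses all outbound traffic unless the destination is explicitly allowlisted. The sketch below illustrates only that policy logic in Python; in practice enforcement happens at the network layer, and the hostnames here are hypothetical.

```python
# Hypothetical allowlist of approved internal destinations.
ALLOWED_DESTINATIONS = {"artifacts.internal.example", "updates.internal.example"}

def egress_allowed(destination: str) -> bool:
    """Deny-by-default: permit outbound traffic only to allowlisted hosts.

    There is no blocklist; anything not explicitly approved is refused.
    """
    return destination in ALLOWED_DESTINATIONS

assert egress_allowed("updates.internal.example")
assert not egress_allowed("api.unknown-third-party.com")  # denied by default
```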
Microsoft researchers had previously suspected DeepSeek of exfiltrating data through OpenAI's API. OpenAI has also said it has seen some evidence of distillation, a technique for improving an AI model's performance by training it on the outputs of another model. The company now requires developers seeking access to its advanced AI models to verify their identity with a government ID.
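For context, distillation in its textbook form (Hinton et al., 2015) trains a smaller "student" model to match the softened output distribution of a "teacher". The PyTorch sketch below shows that loss; the tensor shapes are illustrative, distillation via a public API would typically work from sampled text rather than the raw logits shown here, and nothing in the sketch reflects either company's actual pipeline.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then minimise the
    # KL divergence so the student mimics the teacher's outputs.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature**2

# Toy usage: in API-based distillation, the teacher outputs would be
# collected from a remote model rather than computed locally.
teacher_logits = torch.randn(4, 50_000)                      # batch x vocab
student_logits = torch.randn(4, 50_000, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```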