Confident Security, a startup based in San Francisco, has emerged from stealth mode, securing $4.2 million in funding to advance AI data privacy. The company's flagship product, CONFSEC, ensures end-to-end encryption for AI interactions, preventing user data from being stored, viewed, or used for training by AI vendors or third parties. CONFSEC acts as an intermediary between AI vendors and their customers, addressing privacy concerns in sectors with stringent data regulations, such as healthcare and finance.
CONFSEC is inspired by Apple's Private Cloud Compute (PCC) architecture and layers several protections: anonymisation, advanced encryption, and public transparency. Data is encrypted before being routed through relay services such as Cloudflare or Fastly, so the servers handling it never see the original content. Decryption is permitted only under strict conditions that prohibit logging the data or using it for training. The software that runs AI inference is publicly logged and open to expert review, making the privacy guarantees verifiable.
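The flow described above, where a relay forwards only ciphertext it cannot read, can be sketched in a few lines. This is a conceptual illustration, not Confident Security's actual protocol: a toy HMAC-based keystream stands in for a real authenticated cipher, and all function names here are hypothetical.

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from key + nonce.
    Toy stand-in for a real AEAD cipher such as AES-GCM."""
    out = b""
    counter = 0
    while len(out) < length:
        block = hmac.new(key, nonce + counter.to_bytes(4, "big"),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt on the client side, before anything leaves the device."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def decrypt(key: bytes, nonce: bytes, ct: bytes) -> bytes:
    """Decrypt only at the attested inference endpoint that holds the key."""
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

# The relay between client and AI provider forwards an opaque blob:
key = secrets.token_bytes(32)
nonce, blob = encrypt(key, b"patient presents with chest pain")
assert b"chest" not in blob  # the relay sees only ciphertext
assert decrypt(key, nonce, blob) == b"patient presents with chest pain"
```

The key point is that the relay's role is purely transport: without the key, the blob it forwards reveals nothing about the prompt.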
Confident Security aims to foster trust and enable wider adoption of AI across sensitive sectors. CONFSEC lets businesses wrap their AI inference engines so that prompts and metadata are never used for AI training, never seen by any party, and never accessible to any person in unencrypted form. It can be deployed on any cloud or bare-metal environment, giving businesses the ability to make privacy a technical guarantee rather than a promise.
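What "wrapping" an inference engine might look like can be sketched as a gateway that accepts only ciphertext and exposes no logging or retention hooks. This is a hypothetical illustration of the pattern, not CONFSEC's real API; the class and function names are invented for the example.

```python
from typing import Callable

class ConfidentialGateway:
    """Hypothetical wrapper around an inference function: plaintext exists
    only transiently inside the call, and nothing is logged or retained."""

    def __init__(self, infer: Callable[[str], str]):
        self._infer = infer  # the business's existing inference engine

    def handle(self,
               decrypt: Callable[[str], str],
               encrypt: Callable[[str], str],
               blob: str) -> str:
        prompt = decrypt(blob)       # decrypted only inside the trusted boundary
        reply = self._infer(prompt)  # inference runs on the plaintext prompt
        return encrypt(reply)        # only ciphertext leaves the wrapper

# Demo with a trivial reversible transform standing in for real encryption:
gateway = ConfidentialGateway(lambda p: p.upper())
blob = "hello"[::-1]                          # "encrypted" request
out = gateway.handle(lambda b: b[::-1],       # stand-in decrypt
                     lambda s: s[::-1],       # stand-in encrypt
                     blob)
assert out[::-1] == "HELLO"                   # caller decrypts the reply
```

The design point is that the wrapper owns the only code path where plaintext exists, so "no person can access unencrypted user data" becomes a property of the deployment rather than a policy statement.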