Security and privacy for LLMs and AI-powered features

At Benchling, our mission is to unlock the power of biotechnology. Our customers count on us to speed up their research, improve data accuracy, boost productivity, and help them quickly adopt new scientific technologies. Built on large language model technology, our new AI betas make research easier and more efficient, so scientists can spend less time on manual tasks and more time on their actual science. Our commitment remains firm: to help scientists achieve more with biotech.

At Benchling, security and privacy aren't just policies; they're promises. As we venture into new technological horizons with our new AI betas, our foundational principles around trust remain the same. We invite any questions or feedback as we continue to serve as your trusted partner in biotech R&D.

How are the OpenAI GPT-4 model and the Anthropic Claude model different?

The OpenAI GPT-4 model and the Anthropic Claude model are built on similar large language model technology, and our testing so far shows comparable performance overall. Performance can vary by feature; for example, GPT-4 handles search queries somewhat better than Claude does.

Infrastructure-wise, the two models are deployed in very different ways. 

OpenAI's GPT-4 model is hosted within OpenAI's infrastructure. Benchling calls an external API endpoint to send a request to the GPT-4 model, along with context from the data stored in the Benchling tenant. The GPT-4 model returns a response, which Benchling then uses to complete the feature's workflow.

Anthropic's Claude model is hosted within the Benchling AWS environment. Benchling also uses an API endpoint, but that endpoint is internal: the data never leaves Benchling's AWS environment.
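The routing rule described above can be sketched in code. This is a hypothetical illustration only, under assumed names and URLs (`route_llm_request`, the endpoint addresses, and the consent flag are not Benchling's actual API): the internally hosted model is the default, and the external model is reachable only when the customer has explicitly opted in.

```python
# Hypothetical sketch of the endpoint-selection rule (illustrative names
# and URLs; not Benchling's actual API or infrastructure).

# Assumed internal endpoint inside the AWS environment.
INTERNAL_CLAUDE_URL = "https://llm.internal.example/v1/complete"
# Third-party endpoint hosted in OpenAI's infrastructure.
EXTERNAL_GPT4_URL = "https://api.openai.com/v1/chat/completions"

def route_llm_request(prompt: str, model: str,
                      customer_consented_to_openai: bool = False) -> dict:
    """Choose an endpoint for an LLM request, enforcing the consent rule:
    the external GPT-4 endpoint requires explicit customer opt-in."""
    if model == "gpt-4":
        if not customer_consented_to_openai:
            raise PermissionError(
                "GPT-4 is a third-party endpoint; explicit consent required")
        url = EXTERNAL_GPT4_URL
    else:
        # Default path: data stays inside the hosting environment.
        url = INTERNAL_CLAUDE_URL
    return {"url": url, "payload": {"model": model, "prompt": prompt}}
```

Under this sketch, a tenant that has never opted in can only ever reach the internal endpoint, which is one way to make the consent requirement a property of the code path rather than a policy checked after the fact.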

For that reason, the default model enabled for you is the Anthropic Claude model, deployed within the Benchling AWS environment. We will only enable the GPT-4 model with a customer's explicit consent, since using it means a third party receives data stored in your Benchling tenant.

Key security principles for AI betas

  • Customer consent: Benchling’s AI betas are provided at our Customers' discretion. Benchling will not share your data with a third party without your express, written consent.
  • Multi-tenant controls: Benchling's multi-tenant controls are enforced when using LLM functionality in Benchling’s AI betas, ensuring that our Customers' data remains isolated and logically segregated at all times.
  • Zero data retention: LLMs process Benchling data in-memory only. Once processing is complete, all data is purged from memory. 
  • Zero data training: The third parties that develop our LLM models (OpenAI and Anthropic) possess neither the means nor the permission to train on our Customers’ data. In other words, your data remains untouched and will never be incorporated into our AI betas or into an LLM model.

Our commitment to customer trust

At Benchling, trust is in our DNA. Maintaining security, data privacy, and compliance is essential to our commitment to our Customers. We recognize that we have a significant responsibility to ensure your data is protected and secure.

Our security program is rooted in:

  • Industry standards backing: Our program aligns with the NIST Cybersecurity Framework and the ISO/IEC 27001 standard for information security, ensuring holistic investments in security.
  • External audits: Our commitment to transparency means that our program undergoes external audits annually, encompassing compliance-based audits (ISO 27001, SOC 2 Type 2), customer audits, and technical system audits.

Incorporating security in product development

Security is integrated throughout our software development life cycle. We employ:

  • Secure coding guidelines and annual mandatory secure coding training
  • Threat modeling and architecture reviews
  • Automated and manual security checks
  • Regular vulnerability scanning, including third-party penetration testing

Maintenance and vulnerability management

Beyond deployment, we invest in regular maintenance. Our vulnerability management tools scan our systems, encompassing our production infrastructure, source code, and corporate assets, at least weekly.

Corporate security practices

Aligned with ISO 27001 standards, our corporate security practices ensure:

  • Strict access controls through our SSO provider with mandatory multi-factor authentication for IT services
  • Comprehensive onboarding and offboarding processes
  • Mandatory annual security awareness training for all Benchling personnel


Benchling’s security and privacy programs are certified under ISO/IEC 27001:2013 and the Data Privacy Framework, as approved by the U.S. Department of Commerce. Benchling also maintains a SOC 2 Type 2 attestation.

Security documentation

To learn more about Benchling’s security practices, please read our Information Security Policy and our Security Whitepaper.

Regulatory and privacy compliance

Benchling is compliant with all applicable data protection laws and regulations, including both GDPR and CCPA. To learn more about Benchling’s privacy practices, please visit our Privacy Page.

Contractual protections

Benchling has a robust agreement in place with OpenAI, ensuring that OpenAI is held to the highest standards as our subprocessor. We have also signed a comprehensive Data Processing Agreement (DPA) with OpenAI, further reinforcing our commitment to safeguarding data and ensuring OpenAI adheres to the stringent privacy and security protocols we demand.
