At Benchling, our mission is to unlock the power of biotechnology. Our customers count on us to speed up their research, improve data accuracy, boost productivity, and help them quickly adopt new scientific technologies. Our new features, built on large language model (LLM) technology, make research easier and more efficient, so scientists can spend less time on manual tasks and more time on their actual science. Our commitment remains firm: to help scientists achieve more with biotech.
At Benchling, security and privacy are more than just policies; they're our promises. As we introduce our new LLM-powered features and explore new technological frontiers, our core principles of trust, choice, transparency, and reliability remain unchanged. We welcome any questions or feedback as we continue to be your trusted partner in biotech R&D.
LLM-powered features are opt-in
Benchling recognizes that AI is new and evolving, and we want to give our customers a meaningful choice in enabling LLM-powered features. To ensure you always have control, these features are opt-in. To opt in, you must either: (i) enroll in Benchling's Beta Program by emailing ai-ml@benchling.com or (ii) reach out to your Customer Success manager.
How are the OpenAI GPT-4 model and the Anthropic Claude model different?
The OpenAI GPT-4 model family and the Anthropic Claude model family are built on very similar technologies, and our testing shows they deliver comparable performance. Their relative performance can vary by feature, so we evaluate each Benchling feature against each model to determine the best choice for each configuration.
Infrastructure-wise, the two models are deployed in very different ways:
- Anthropic Claude Model: Hosted within AWS Bedrock, ensuring data never leaves Benchling's AWS environment. Benchling uses an internal API endpoint for data processing.
- OpenAI GPT-4 Model: Hosted within OpenAI's infrastructure, which runs on Microsoft's cloud services. Benchling uses an external API endpoint to send requests to the GPT-4 model, providing context from data stored in the Benchling tenant. The GPT-4 model then returns a response that Benchling uses to complete the requested action.
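To make the distinction concrete, here is a minimal sketch (not Benchling's actual code) of how the two request paths differ: the Claude request targets a Bedrock endpoint inside the AWS environment, while the GPT-4 request goes to an external OpenAI endpoint. The model identifiers and endpoint URL are illustrative assumptions; the sketch only builds the request payloads, it does not send them.

```python
import json

def build_bedrock_request(prompt: str) -> dict:
    """Claude via AWS Bedrock: the call is served by an endpoint inside
    the same AWS environment, so prompt data never leaves it.
    The model ID below is an illustrative assumption."""
    return {
        "service": "bedrock-runtime",        # internal AWS service endpoint
        "modelId": "anthropic.claude-v2",    # hypothetical model identifier
        "body": json.dumps({"prompt": prompt, "max_tokens_to_sample": 512}),
    }

def build_openai_request(prompt: str) -> dict:
    """GPT-4 via OpenAI: the call leaves the local environment for an
    external API endpoint hosted on OpenAI/Microsoft infrastructure."""
    return {
        "url": "https://api.openai.com/v1/chat/completions",  # external endpoint
        "body": json.dumps({
            "model": "gpt-4",
            "messages": [{"role": "user", "content": prompt}],
        }),
    }
```

The security-relevant difference is entirely in where the request is routed, not in the shape of the payload: both carry the same tenant-scoped context, but only the second one crosses an environment boundary.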
Key security principles for LLM-powered features
- Transparency: At Benchling, we prioritize transparency and are committed to keeping our customers fully informed about when and how AI is being used. We will clearly communicate any use of AI in our services, ensuring that you are aware of how your data is being processed.
- Customer consent: Benchling’s LLM-powered features are provided at our customers' discretion, on an opt-in basis. Additionally, Benchling will not share your data with a third party without your consent.
- Multi-tenant controls: Benchling's multi-tenant controls are enforced when using LLM-powered features, ensuring that our Customers' data remains isolated and logically segregated at all times.
- Zero data retention: LLMs process Benchling data in-memory only. Once processing is complete, all data is purged from memory.
- Zero data training: The third parties that develop our LLM models (OpenAI and Anthropic) possess neither the means nor the permission to train on customer data. In other words, your data will never be incorporated into our LLM-powered features or into an LLM model.
Our commitment to customer trust
At Benchling, trust is in our DNA. Maintaining security, data privacy, and compliance is essential to our commitment to our customers. We recognize our responsibility to keep your data protected and secure.
Our security program is rooted in:
- Industry standards backing: Our program aligns with the NIST Cybersecurity Framework and the ISO/IEC 27001 standard for information security, ensuring holistic investments in security.
- External audits: Our commitment to transparency means that our program undergoes external audits annually, encompassing compliance-based audits (ISO 27001, SOC 2 Type 2), customer audits, and technical system audits.
Incorporating security in product development
Security is integrated throughout our software development life cycle. We employ:
- Secure coding guidelines and annual mandatory secure coding training
- Threat modeling and architecture reviews
- Automated and manual security checks
- Regular vulnerability scanning, including third-party penetration testing
Maintenance and vulnerability management
Beyond deployment, we invest in regular maintenance. Our vulnerability management tools scan our systems, encompassing our production infrastructure, source code, and corporate assets, at least weekly.
Corporate security practices
Aligned with ISO 27001 standards, our corporate security practices ensure:
- Strict access controls through our SSO provider with mandatory multi-factor authentication for IT services
- Comprehensive onboarding and offboarding processes
- Mandatory annual security awareness training for all Benchling personnel
Certification
Benchling’s security and privacy programs are certified under ISO/IEC 27001:2013 and the Data Privacy Framework, as approved by the U.S. Department of Commerce. Benchling also maintains a SOC 2 Type 2 attestation.
Security documentation
To learn more about Benchling’s security practices, please read our Information Security Policy and our Security Whitepaper.
Regulatory and privacy compliance
Benchling complies with all applicable data protection laws and regulations, including the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). To learn more about Benchling’s privacy practices, please visit our Privacy Page.
Contractual protections
Benchling has a robust agreement in place with OpenAI, holding OpenAI to the highest standards as our subprocessor. We have also signed a comprehensive Data Processing Agreement (DPA) with OpenAI, further reinforcing our commitment to safeguarding data and ensuring OpenAI adheres to the stringent privacy and security protocols we demand.