At Benchling, our mission is to unlock the power of biotechnology. Our customers count on us to speed up their research, improve data accuracy, boost productivity, and help them quickly adopt new scientific technologies. Our agent features, built on large language model ("LLM") technology, focus on increasing productivity, helping scientists spend less time on manual tasks and more time on their actual science. We are also embedding predictive scientific models directly into scientists' everyday workflows. Our commitment remains firm: to help scientists achieve more with biotech.
At Benchling, how we protect and secure data is fundamental to the commitments we make to our customers. As we introduce our new AI features and explore new technological frontiers, our core principles of trust, choice, transparency, and reliability remain unchanged. We welcome any questions or feedback as we continue to be your trusted partner in biotech R&D.
LLM model providers
The LLMs that power Benchling are provided by Amazon (via Amazon Bedrock), Google Vertex, and OpenAI. Amazon Bedrock is an Amazon Web Services ("AWS") offering that provides access to a variety of models, such as Anthropic Claude, Meta Llama, and Amazon Nova. OpenAI and Google Vertex each offer their own suite of LLMs that can be accessed by API. We continually evaluate the most effective model for each use case and follow the security and data protection principles below.
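For readers curious what "accessed by API" looks like in practice, here is a minimal, purely illustrative sketch of invoking a Bedrock-hosted model with the AWS SDK for Python (boto3). The region, model ID, and prompt are example values; this is not Benchling's actual integration code.

```python
# Illustrative sketch only -- not Benchling's integration code.
# Shows how an application can invoke a model hosted on Amazon Bedrock
# over the AWS API using boto3's Converse operation.
import boto3

# Create a Bedrock runtime client (region is an example value).
bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

# Send a single user message to an example Bedrock-hosted model.
response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarize this assay result."}]}
    ],
)

# Print the model's text reply from the Converse response structure.
print(response["output"]["message"]["content"][0]["text"])
```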
Key principles for LLM-powered features
- Transparency and consent: At Benchling, we prioritize transparency and are committed to keeping our customers fully informed about when and how AI is being used. We will clearly communicate any use of AI in our services, ensuring that you are aware of how your data is being processed.
- Multi-tenant controls: Benchling's multi-tenant controls are enforced when using LLM-powered features, ensuring that our customers' data remains isolated and logically segregated at all times.
- Retention: Benchling's third-party model providers commit to storing data only temporarily or in memory; once processing is complete, they are required to securely purge it.
- Data training: The third parties that provide the LLMs we use (Amazon Bedrock, Google Vertex, and OpenAI) are not permitted to use customer data for model training. Additionally, while some of the LLMs offered by these providers are developed by other third parties, such as Claude (developed by Anthropic) and Llama (developed by Meta), those model developers never process customer data because they do not host the models we use.
Data access and use
Customers on Benchling's paid offering can control whether Benchling may access and use the input/output data generated by their use of AI features. The setting is in the tenant admin console under AI Settings > Benchling data access and use. Enabling it allows authorized Benchling employees to view customer inputs and outputs in order to debug issues, address customer feedback, and evaluate feature performance. This data is never used for model training, and you may revoke this access at any time.
Committed to keeping your data secure
At Benchling, trust is in our DNA. We recognize our responsibility to keep your data protected and secure, and we consider this a core commitment to our customers.
Security
Our security program is rooted in:
- Industry-standard alignment: Our program aligns with the NIST Cybersecurity Framework and the ISO standard for information security (ISO/IEC 27001), ensuring holistic investment in security.
- External audits: Our commitment to transparency means that our program undergoes external audits annually, encompassing compliance-based audits (ISO 27001, SOC 2 Type 2), customer audits, and technical system audits.
Incorporating security in product development
Security is also integrated throughout our software development life cycle. We employ:
- Secure coding guidelines and annual mandatory secure coding training
- Threat modeling and architecture reviews
- Automated and manual security checks
- Regular vulnerability scanning, including third-party penetration testing
Corporate security practices
Aligned with ISO 27001 standards, our corporate security practices ensure:
- Strict access controls through our single sign-on ("SSO") provider with mandatory multi-factor authentication for IT services
- Comprehensive onboarding and offboarding processes
- Mandatory annual security awareness training for all Benchling personnel
Certification
Benchling's security and privacy programs are certified under ISO/IEC 27001:2013 and the Data Privacy Framework, as approved by the U.S. Department of Commerce. Benchling also maintains a SOC 2 Type 2 attestation.
Security documentation
To learn more about Benchling’s security practices, please read our Information Security Policy and our Security Whitepaper.
Privacy and data protection compliance
Benchling complies with all applicable data protection laws and regulations, including the General Data Protection Regulation ("GDPR") and the California Consumer Privacy Act, as amended by the California Privacy Rights Act ("CCPA/CPRA"). To learn more about Benchling's privacy practices, please visit our Privacy Page.