At Benchling, our mission is to unlock the power of biotechnology. Benchling AI — which includes a suite of AI features, agents, and models embedded directly into Benchling — accelerates scientific progress while upholding the same standards of trust as the rest of the Benchling platform.
This documentation provides an overview of how Benchling protects and secures data when customers use Benchling AI. Our commitment remains firm: to help scientists achieve more, while ensuring that trust, choice, transparency, and reliability remain foundational to everything we build.
Overview of Benchling AI
Benchling offers three categories of AI capabilities: features, agents, and scientific models.
AI features and agents are LLM-powered, leveraging third-party large language model providers to enable productivity-enhancing capabilities such as assistance with research, writing, and data analysis. These AI capabilities send data to third-party model providers for inference and receive generated responses.
Scientific models leverage computational biology models — such as AlphaFold, Chai, and Boltz — to enable predictions like protein structure determination and generate useful outputs like custom protein binders. These models run within Benchling-controlled infrastructure and do not transmit data to third-party model providers.
Core data protection and security principles
Benchling AI is designed around the following data protection and security principles.
Note: some of the controls outlined below vary for users of Benchling for Academics and the trial version of Benchling AI at benchling.ai.
No cross-customer model training
Benchling does not train any of its own models on customer data, except where a customer uses a feature that requires a model tailored specifically to that customer's use, such as the Experiment Optimization feature, which makes recommendations based on a customer's own data. In these limited cases, the customer-tailored model is used exclusively for that customer and is never used for any other customer.
The third-party LLM providers powering Benchling AI are prohibited from training on customer data. The third-party-developed scientific models that customers can run within Benchling are likewise never trained on customer data.
Contractually required data handling
Data handling by third-party model providers (e.g., Amazon Bedrock, OpenAI) is subject to contractually required controls. Benchling requires all third-party model providers to store data temporarily or in-memory only, to securely delete all data after that temporary period, and to refrain from training on any data transmitted to them.
Data isolation and authorization
Benchling's multi-tenant isolation controls apply to customers' use of Benchling AI, and cross-tenant data access is systematically prevented. A user's use of AI capabilities always aligns with the permissions that user has been granted in the Benchling platform.
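To make this concrete, the sketch below models the kind of permission-scoped retrieval described here: data reaches an AI capability only after a tenant-isolation check and a per-record authorization check. This is a minimal, hypothetical illustration; the User, Record, and fetch_context_for_ai names are invented for this example and do not reflect Benchling's implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: context retrieval for an AI capability is scoped
# to the caller's existing platform permissions, so the AI can never see
# records the user could not already read.

@dataclass(frozen=True)
class Record:
    record_id: str
    tenant_id: str
    content: str

@dataclass
class User:
    user_id: str
    tenant_id: str
    readable_record_ids: set[str] = field(default_factory=set)

def fetch_context_for_ai(user: User, records: list[Record]) -> list[Record]:
    """Return only the records the user is authorized to read.

    Two checks mirror the controls described above:
    1. tenant isolation -- the record must belong to the user's tenant
    2. authorization    -- the user must hold read permission on it
    """
    return [
        r for r in records
        if r.tenant_id == user.tenant_id
        and r.record_id in user.readable_record_ids
    ]

if __name__ == "__main__":
    alice = User("u1", "tenant-a", {"rec-1"})
    corpus = [
        Record("rec-1", "tenant-a", "visible to Alice"),
        Record("rec-2", "tenant-a", "no read permission"),
        Record("rec-3", "tenant-b", "different tenant"),
    ]
    assert [r.record_id for r in fetch_context_for_ai(alice, corpus)] == ["rec-1"]
```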
Encryption standards
Data is encrypted in transit and at rest within Benchling's platform. Data transmitted to third-party model providers is also encrypted in transit.
Transparency
We will clearly indicate when you are using an AI Service, ensuring that you are aware of how your data is being processed.
For more detailed information on our data protection and security practices for Benchling AI, as well as our security architecture, please review our AI Security Whitepaper in Benchling’s Security Center.
LLM-powered AI services
Model Providers
Benchling uses the following third-party providers to host the LLMs that power Benchling AI.
Amazon Bedrock provides access to models such as Anthropic Claude, Meta Llama, and Amazon Nova. Amazon Web Services hosts these models and processes data according to Bedrock's data handling policies and the contractual requirements that have been agreed to with Benchling.
Google Vertex AI provides access to models such as Google Gemini and Anthropic Claude. Google Cloud hosts these models and processes data according to Vertex AI's data handling policies and the contractual requirements that have been agreed to with Benchling.
OpenAI provides access to OpenAI's proprietary models. OpenAI also provides the web search capability used in the Public Data feature for Deep Research, when enabled. OpenAI processes data according to OpenAI's data handling policies and the contractual requirements that have been agreed to with Benchling.
It is important to note that certain model developers, such as Anthropic and Meta, do not have direct access to customer data. Benchling accesses their models through hosting providers (e.g., Amazon Bedrock, Google Vertex AI), and those hosting providers are the entities that process the data; the model developers themselves never receive or process it. For model developers who do have access to customer data, such as OpenAI, Benchling has contractual protections in place to ensure that customer data is retained only temporarily and is not used for model training.
Benchling continuously evaluates models for effectiveness across different use cases and may select different models for different features. All selected models and providers are subject to the data handling and security requirements described in this documentation.
Data flow and retention for LLM-powered features
When a user uses an LLM-powered service, Benchling combines the user’s prompt with system instructions and relevant context and transmits this input to the relevant third-party model provider(s). The model provider processes the input using the Benchling-selected model and returns a generated response.
If additional data or inputs are needed to fulfill the request, Benchling retrieves the data from Benchling data sources (in accordance with the user's permissions) and transmits it, along with any additional inputs, to the model provider. The final response returned to the user reflects the entire exchange between Benchling and the third-party model provider(s).
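This exchange has the shape of a standard tool-use loop: send the assembled input, retrieve additional data if the model requests it, and repeat until a final response is produced. The sketch below is a simplified, hypothetical rendering of that loop; the call_provider and retrieve interfaces, and the message format, are assumptions made for illustration rather than Benchling's actual architecture.

```python
from typing import Callable

# Hypothetical sketch of the LLM-powered request flow described above.
# The provider call and the retrieval function are assumed interfaces.

def run_llm_request(
    user_prompt: str,
    system_instructions: str,
    call_provider: Callable[[list[dict]], dict],  # third-party LLM call (assumed)
    retrieve: Callable[[str], str],               # permission-checked retrieval (assumed)
    max_turns: int = 5,
) -> str:
    messages = [
        {"role": "system", "content": system_instructions},
        {"role": "user", "content": user_prompt},
    ]
    for _ in range(max_turns):
        reply = call_provider(messages)
        if reply.get("data_request"):
            # The model needs more context: fetch it from Benchling data
            # sources in accordance with the user's permissions, then
            # transmit it back to the provider on the next turn.
            messages.append({"role": "tool",
                             "content": retrieve(reply["data_request"])})
            continue
        # Final generated response; input and output are then stored in
        # Benchling's core database (e.g., to support chat history).
        return reply["content"]
    raise RuntimeError("no final response within the turn budget")
```

Bounding the loop with max_turns is one simple way to keep a multi-step exchange from running indefinitely.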
Input and output data from LLM-powered AI Services is stored within Benchling's core database to support product functionality, such as chat history. This data is subject to the same tenant isolation and data protections as all other customer data. Third-party model providers do not permanently retain data: they store it temporarily or in-memory only and securely delete it after processing, in accordance with contractual requirements.
For more detail on data flow and retention, as well as other data protection and security practices relevant to LLM-powered AI Services and the relevant security architecture, please review our AI Security Whitepaper in Benchling’s Security Center.
Scientific models
Supported models
Benchling provides access to scientific models, some open source and some proprietary, that perform in silico predictions (e.g., protein structure determination) and generate outputs (e.g., custom protein binders).
Benchling does not develop these scientific models. Models are obtained from their official sources, subjected to a security review, and structured for execution within Benchling infrastructure.
Security review process
Before any scientific model is made available to users, Benchling performs a security review as part of its Secure Software Development Lifecycle. Model images are hosted in a Benchling-controlled repository in Amazon Web Services (AWS), and the review and approval process involves multiple Benchling product and security engineers.
Data flow and retention for scientific models
When a user initiates a scientific model run through the Benchling application, Benchling first checks the user's permissions and then securely retrieves the data needed for the model. This data is sent to Benchling-controlled infrastructure, where the model executes on the data in ephemeral, isolated containers that are terminated after each run completes. No data is transmitted to external third-party model providers.
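A minimal sketch of this lifecycle appears below, modeling the container as a context manager so that teardown happens after every run, whether it succeeds or fails. The EphemeralContainer and run_scientific_model names are hypothetical and purely illustrative.

```python
from typing import Callable

class EphemeralContainer:
    """Stand-in for an isolated, per-run execution environment."""

    def __enter__(self):
        print("container started (isolated, Benchling-controlled)")
        return self

    def run_model(self, model_image: str, inputs: bytes) -> bytes:
        # The model executes entirely inside this infrastructure; no data
        # is transmitted to external third-party model providers.
        return b"prediction-for:" + inputs

    def __exit__(self, *exc):
        # The container is terminated after each run completes,
        # whether it succeeded or raised.
        print("container terminated")
        return False

def run_scientific_model(readable_ids: set[str], record_id: str,
                         fetch: Callable[[str], bytes],
                         model_image: str) -> bytes:
    if record_id not in readable_ids:         # permissions checked first
        raise PermissionError(record_id)
    inputs = fetch(record_id)                 # secure, scoped retrieval
    with EphemeralContainer() as container:   # torn down after the run
        return container.run_model(model_image, inputs)

if __name__ == "__main__":
    store = {"seq-1": b"MKTAYIAKQR"}
    print(run_scientific_model({"seq-1"}, "seq-1",
                               store.__getitem__, "structure-model:reviewed"))
```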
For more detail on data flow and retention, as well as other data protection and security practices relevant to our scientific model AI Services and the relevant security architecture, please review our AI Security Whitepaper in Benchling’s Security Center.
Benchling data access and use
Customers using the paid offering of Benchling can control whether Benchling can access the input and output data generated from their use of Benchling AI. Currently, only the inputs and outputs of LLM-powered AI Services can be accessed by Benchling, and only through its observability service (described below).
The data access and use toggle
The Tenant Admin Console includes an AI Settings page with a “Benchling Data Access and Use” toggle. When enabled, this setting transmits LLM input and output data to Benchling’s self-hosted observability service. Authorized Benchling employees can then access this data to debug issues, address customer feedback, and evaluate the performance of LLM-powered AI features and agents. (This access through the observability service does not extend to the inputs and outputs of scientific model AI Services.)
The “Benchling Data Access and Use” toggle is off by default. Enabling it is entirely optional, and LLM-powered Benchling AI features and agents function normally regardless of this setting. Tenant administrators may enable or disable this setting at any time; disabling it immediately stops new data from being transmitted to the observability service.
Data transmitted to the observability service is retained for 30 days, after which it is permanently deleted. This data is not used for model training and is not shared with third parties. Access to the observability service is restricted to a small group of authorized Benchling engineers and is administered through a centralized, IT-gated process.
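As an illustration only, the sketch below shows how a tenant-level toggle of this kind could gate transmission, and how a 30-day retention window could be enforced by purging expired records. The ObservabilityService class and the setting key are invented for this example and do not reflect Benchling's internal code.

```python
import datetime

RETENTION = datetime.timedelta(days=30)  # per the stated retention period

class ObservabilityService:
    """Toy stand-in for a self-hosted observability store."""

    def __init__(self):
        self._events = []  # (received_at, payload) pairs

    def record(self, payload: dict) -> None:
        self._events.append(
            (datetime.datetime.now(datetime.timezone.utc), payload))

    def purge_expired(self) -> None:
        # Data older than 30 days is permanently deleted.
        cutoff = datetime.datetime.now(datetime.timezone.utc) - RETENTION
        self._events = [(t, p) for t, p in self._events if t >= cutoff]

def maybe_send_llm_io(tenant_settings: dict, service: ObservabilityService,
                      prompt: str, response: str) -> None:
    # Off by default: nothing is transmitted unless a tenant admin has
    # explicitly enabled the setting; disabling it stops new transmissions.
    if not tenant_settings.get("benchling_data_access_and_use", False):
        return
    service.record({"prompt": prompt, "response": response})
```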
Note that certain Benchling offerings — including the academic version and Benchling AI trial experience — may have different default settings for observability data collection.
What data is included
When the Data Access and Use toggle is enabled, the observability service receives the full content of inputs sent to and outputs received from the LLMs powering Benchling AI. This includes user prompts, system prompts, customer data retrieved and transmitted to the model provider, and the model’s responses.
Benchling does not currently transmit scientific model inputs and outputs to the observability service.
Human-in-the-loop design and auditing
AI capabilities in Benchling are designed with human oversight in mind.
Without explicit user confirmation, Benchling AI capabilities can only read data. Any AI capability that writes data or performs actions in Benchling requires the user to accept the output or action, and such writes appear in product audit logs as actions made by the user.
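The sketch below illustrates this gate in miniature: an AI-proposed write executes only after explicit user acceptance, and the resulting audit entry is attributed to the user. The ProposedWrite and apply_ai_write names are hypothetical, invented for this example.

```python
from dataclasses import dataclass

@dataclass
class ProposedWrite:
    target: str
    new_value: str

audit_log: list[dict] = []

def apply_ai_write(user_id: str, proposal: ProposedWrite,
                   user_accepted: bool, db: dict) -> bool:
    if not user_accepted:
        # Without explicit confirmation, AI capabilities can only read.
        return False
    db[proposal.target] = proposal.new_value
    # The write is audited as an action made by the user.
    audit_log.append({"actor": user_id, "action": "update",
                      "target": proposal.target})
    return True

if __name__ == "__main__":
    db = {"note-1": "draft"}
    proposal = ProposedWrite("note-1", "final")
    assert not apply_ai_write("u1", proposal, user_accepted=False, db=db)
    assert apply_ai_write("u1", proposal, user_accepted=True, db=db)
    assert db["note-1"] == "final" and audit_log[0]["actor"] == "u1"
```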
Changes to AI settings in the Tenant Admin Console are also audited, with audit logs available for download.
Security and compliance programs
Certifications
Benchling’s security and compliance programs are independently audited and align with leading industry frameworks. Benchling maintains ISO/IEC 27001:2013 certification and SOC 2 Type 2 attestation.
To learn more about Benchling’s security practices, please visit the Benchling Trust Center and review both the Information Security Policy and our Security Whitepaper.
Secure development lifecycle
AI capabilities are developed under the same secure software development lifecycle as all Benchling features. This includes secure coding guidelines, mandatory security training, threat modeling, architecture reviews, automated and manual security testing, and regular vulnerability scanning including third-party penetration testing.
Supplier security
Third-party suppliers used for AI capabilities, including model providers, are evaluated as part of Benchling's supplier security program prior to implementation.
Privacy and data protection compliance
Benchling complies with applicable data protection laws and regulations, including the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act as amended by the California Privacy Rights Act (CCPA/CPRA). Benchling is also certified under the Data Privacy Framework program administered by the U.S. Department of Commerce.
AI service terms
Benchling has published AI Service Terms that articulate Benchling’s approach to governing the use of Benchling AI. These terms address topics including acceptable use, data usage, and intellectual property. The AI Service Terms are available at benchling.com/ai-service-terms.