AI Reporting Beta


To try the AI Reporting Beta capability, email

Benchling's AI Reporting Beta uses generative AI to write a report that summarizes a set of notebook entries you specify, or all the entries referenced in a study. The report is structured in the style of a report template that you specify, which has instructions on what the AI system should include in each section.

Usage overview

1. Create a report template

A report template is a regular entry template with additional instructions for the AI system. To create one, navigate to Template Collections and create a new entry template. Within that template, you may include any number of directive lines for the AI system. Each directive must be on a single line that starts with "[[" and ends with "]]". Here's an example:

[[Write an introduction outlining experimental goals, methodology, and a brief description of findings.]]

When the report is created, each directive line is replaced by AI-generated text, and all other lines are left unchanged. Here's an example of a report template:
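For illustration, a full template might interleave static text with several directive lines, one per section. The section headings below are hypothetical; any headings and directives can be used:

```
Study Summary Report

Introduction
[[Write an introduction outlining experimental goals, methodology, and a brief description of findings.]]

Methods
[[Summarize the methods used across the selected entries, noting any deviations between entries.]]

Results
[[Summarize the key results from the selected entries.]]

Conclusion
[[Write a brief conclusion and suggest possible next steps.]]
```

When the report is generated, the headings and any other static lines are carried over verbatim, and each "[[...]]" line is replaced with AI-generated text for that section.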



2. Select the entries to summarize and generate the report

In the project or search listing, switch to the full-screen view if necessary by clicking the right chevron >. Select any number of notebook entries to be summarized, click the More dropdown, and then click Generate report using AI.


Alternatively, when using the Studies product, completed studies will have a Create study report using AI button that creates a report based on all notebook entries in the study.


Select the report template, enter a name for the report, and choose a destination folder, then click Create.


The report generation process may take from 30 seconds to 3 minutes, depending on how much information was requested.

3. Review the generated report draft

The generated report draft will be created as a Benchling Notebook entry and will automatically open.

The beginning of the generated report will have a banner explaining that the report was generated using AI and that it should be checked for accuracy. Once you have checked the report for accuracy and made any necessary edits, you may wish to remove this banner.

Unless you specify otherwise in the report template, the entry will contain links to the original entries as applicable.


Available AI models

Benchling currently supports two AI models:

  • Anthropic Claude is the default model and runs entirely within Benchling's AWS environment; no data is shared with any additional third parties.
  • OpenAI GPT-4 is also available but requires an additional written agreement. Please reach out to your Benchling representative for more details.

See OpenAI GPT-4 & Anthropic Claude LLM functionality security and privacy for more details.

Limitations and recommendations

Usage limits

If the total size of the entries you specify is too large, you may receive an error message due to exceeding the allowed input size of the AI system. This allowed size varies based on the choice of AI model, and is approximately 200 pages (96,000 words) of text.

Users on a tenant can generate up to 100 reports per month. After the limit is reached, the capability is disabled until the first day of the next month.

Data access

The AI Reporting Beta can only process text, structured data, and unstructured data within notebook entries on the Benchling platform. It cannot access or interpret attachments, entities, or other files in your tenant. If your project or research involves data outside the text of your ELN entries, those elements are not captured by the AI Reporting Beta, so ensure that relevant information is recorded as text within the ELN for optimal results. While the intention is to include all useful information from the entries, some content may be skipped; for example, text formatting is not currently used.

Domain expertise

The AI Reporting Beta is built on OpenAI's GPT-4 and Anthropic's Claude models, which are trained on a broad range of topics, but it does not replace specialized knowledge in niche scientific fields. It may sometimes generalize or make assumptions based on its training data rather than the specific details of your domain, and it might not always capture the nuances of your scientific discipline.

Risk of inaccurate information or hallucinations

The term “hallucinate” in machine learning refers to a model producing information that may seem accurate but is in fact fabricated or not based on the input data. While mitigations are in place, the AI Reporting Beta may still hallucinate details, especially when dealing with a specialized or unfamiliar topic. Always cross-reference the information produced by the model against trusted sources or domain experts, and ensure rigorous verification processes are in place, especially if the generated content will be used for critical decision-making, publication, or other material purposes.

Human review is essential

No matter how advanced the underlying model, the AI Reporting Beta was built using machine learning and may not always capture the intricacies, complexities, and subtleties of human research. Always have a domain expert perform a full review of the generated content for accuracy and relevance. Do not rely on any content generated by the AI Reporting Beta without the review and approval of a human domain expert.

Privacy and security documentation

For information on privacy and security, see OpenAI GPT-4 & Anthropic Claude LLM functionality security and privacy.
