Note: Notebook check is part of Benchling Intelligence. See the Benchling Intelligence page for information on how to enable notebook check and other AI-based features. This feature is in open beta: it can be enabled via the tenant admin console, but keep in mind that it is still a work in progress.
Notebook check uses AI to detect possible issues in any Notebook entry. For example, it can detect:
- Deviations from notebook guidelines, such as issues with formatting and naming conventions.
- Incomplete notebook entries, such as entries with missing data attachments.
- Possible mistakes, such as inconsistencies and unusual numerical values.
The feature is available both when authoring and when reviewing an entry.
Usage
Note: Notebook check will give better results if it has been pre-configured with notebook guidelines. See the configuration section below for details.
To start using the feature in any Notebook entry, open the Review tab. There are three ways to do this:
- In the AI tools menu in the toolbar, select Check entry.
- In the information panel on the right-hand side, select Check entry.
- Click the Review tab directly. You may also wish to use Split workspace and rearrange the tabs as desired.
After spending a few seconds checking your entry, the system produces a list of possible concerns above the review history.
The checks are performed as soon as you open the Review tab for the first time, and can be re-run manually by clicking the re-check icon. They are purely informational, are not saved, and are not stored as part of the review.
Recommended workflow for entry author
- As you work on your entry, consider opening the check results and re-running the checks as necessary.
- For each actionable check result, click the checkbox to mark it as completed. This state is not saved, but it can help you keep track of which items have been addressed.
- If a result was particularly useful or not useful, consider clicking the thumbs-up or thumbs-down icon to give feedback to the Benchling team.
- After you make significant changes to the entry, re-run the checks and review them again.
The Benchling team uses metrics on the "Useful" and "Not useful" buttons to improve the product. Any access to the underlying data is governed by the AI Closed Beta Data Review Policy.
Recommended workflow for entry reviewer
- When reviewing an entry, open the Review tab to view the check results. Note that these may not exactly match the check results that the author saw.
- Consider each check result as part of your review.
Configuration
Admin configuration is not required, but it allows notebook check to be tailored to the needs of your use case. Without configuration, the feature uses a default set of typical guidelines.
Currently, there is one centralized set of guidelines that applies to all Notebook entries, and you must be a tenant admin to set the configuration. To configure the guidelines:
- Click your profile picture in the bottom-left and select Tenant admin console.
- Click Settings.
- Click Benchling Intelligence.
- In the Notebook check section, under Notebook check guidelines, select the Use custom guidelines checkbox.
- The field is pre-populated with the default guidelines, which you can customize however you wish.
- A Settings changed banner will appear. To apply the changes, click Save.
Here is one example set of guidelines:
Section guidelines:
* Every notebook entry should have an introduction stating experimental goals.
* Every notebook entry should have a conclusion with objective observations. This conclusion must NOT contain speculation.
Other best practices:
* All file attachments must have an associated explanation.
* Data file attachments should always be associated with a notebook table of the corresponding data.
* Tables should be given meaningful names rather than the default names Table1, Table2, etc.
Data entry mistakes to check:
* Possible typos involving a missing decimal point or an added or missing digit.
* Possible missing samples.
* Duplicate numerical values that may be a copy and paste error.
Changes to the guidelines take effect immediately, so you are encouraged to test them out on an example entry.
Guidance for writing guidelines
- The notebook guidelines do not need to follow any particular format. Bullet points and sections may help organize the guidelines, but are not required.
- In most cases, the guidelines should be written in the same style you would use when writing guidelines directly for scientists. If your organization or team already has notebook guidelines for scientists, those can often be used directly, at least as a starting point.
- While the AI system has a broad understanding of scientific topics, it may be helpful to define any relevant terminology or provide additional context. The AI system does not have access to your overall system configuration or other data in Benchling beyond the guidelines and the relevant Notebook entry.
- It is often helpful to give specific examples of writing that should be considered acceptable and writing that should be considered an error.
- The AI system can understand exception cases. For example, the guidelines might state that specific missing data is allowed if there is an explanation of why the data is missing.
- In some cases, it is helpful to instruct the AI system to give a specific message when an error is flagged, such as pointing the user to a particular resource or to a particular person for more information on a policy (see the example after this list).
- In addition to setting notebook expectations, the configured guidelines may specify what areas the AI system should focus on and which areas it should ignore. For example, you may instruct the system to ignore spelling errors or to flag them as mistakes.
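For example, a custom guideline that combines an exception case with a specific user-facing message might look like the following (the control type, resource, and contact named here are hypothetical placeholders; substitute your own):
* Every data table should include a vehicle control sample. A missing control is acceptable only if the entry explains why it was omitted.
* If a file attachment has no explanation, flag it and direct the author to the team's attachment documentation SOP, or to the lab's documentation lead, for more information on the policy.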
Risks from AI mistakes
While all AI-based systems may make mistakes, Benchling’s notebook check feature is designed to reduce the risk from AI mistakes, since all results are purely informational and do not modify the contents of your entry. Still, be aware of the following risks:
- The system may produce false positives, which may be misleading. Always treat results with appropriate skepticism, and double-check any concerns before updating your entry.
- The system may overlook legitimate errors in your entry. It is intended to be an additional check rather than a substitute for human review.
Limitations
- The AI system is not guaranteed to give the same results when used multiple times, even when run on exactly the same notebook entry with the same guidelines. For example, this means that the entry author and reviewer may see slightly different check results.
- The notebook guidelines can only be configured once for the entire Benchling tenant, and cannot be configured more specifically for templates, teams, etc.
- The AI system only has access to the current notebook entry and the configured guidelines; it does not have access to any other system configuration.
- The AI system does not have access to previous results or information about the use of any other AI-based features in Benchling.
- Not all details of the notebook entry are visible to the AI system. For example, attachment contents are not considered, linked items are not followed, and some font styles and colors are ignored.
- The results from notebook check are not saved.
- Notebook check may only be run on one entry at a time, not in bulk or in an automated fashion.
Security and privacy
For more information about privacy and security for AI-powered features, see Security and Privacy for Benchling Intelligence.