Electronic laboratory notebooks (ELNs) were designed to document experiments, but many scientists now see them as little more than digital filing cabinets. In fact, in a recent Sapio Sciences survey on ELN use, respondents reported persistent usability and data access challenges.

Rob Brown, PhD
Global Vice President
Head of the Scientific Office
Sapio Sciences
In this Innovation Spotlight, Rob Brown, the global vice president and head of the Scientific Office at Sapio Sciences, dives into the survey results, highlighting the challenges that scientists face along with the opportunities for AI to transform ELNs into more intuitive and capable research platforms.
You recently performed a study where you surveyed scientists about ELN use. What inspired you to conduct this study, and what types of scientists did you speak to?
We wanted to understand how people are using ELNs today—how they feel about them, what’s working, what isn’t, and where the limitations are. The survey included scientists across large pharma and biotech, spanning discovery, development, and manufacturing. It was a broad sample geographically as well, though we primarily surveyed those in the US and Europe.
What aspects of data management and ELN usage are concerning the scientists that you surveyed?
People still struggle to find where their data is, and the ELN doesn't help them think about what to do next. ELNs are still quite passive. It's common for a scientist to go into their ELN to find relevant information about a previous experiment, then leave the ELN to talk to colleagues about what to do next, and only come back to the ELN to document the next experiment.
With AI, people are demanding something different: they want the ELN to be an active helper in their thought process. They want it to be like a co-scientist that can help them decide what to make and test.
What behaviors are emerging as a result of ELN limitations?
One thing that came out of the survey is that scientists who aren’t being provided AI tools through their organizations are starting to do it for themselves. If you go home and use ChatGPT to plan a vacation or Gemini to write a letter, you know that those tools are powerful. If you come to the lab knowing that those tools exist, but you’re not being given access to anything similar, you might bring those capabilities in on your own, even if it’s better to have the organization provide those tools and put guardrails around the infrastructure.
Things are changing very quickly, and scientists know what the leading edge looks like. If an organization isn’t keeping up with them, the researchers are going to find ways to do it themselves.
Another problem is that some experiments get repeated because there isn't a good way to find out that they've already been done. A research informatics capability should tell scientists whether an experiment has been done before, or if not that exact experiment, then something similar, and whether there is something to learn from how one scientist did it versus another. The harder it is to answer those questions, the more likely someone is to say, "Let me just do it again." Every time a scientist does that, they've just spent a thousand dollars or more on something that they didn't need to do.
What AI can do, if it’s paired with the right informatics infrastructure, is make it much easier to ask those questions. In theory, I can find out whether an experiment has been done before. But if that means asking a data scientist to do the search or building a long, complicated query, that’s a barrier.
If instead I can go into an ELN and simply ask, “Has anybody done this experiment before?” and the system says no—or, “Has anybody done something like this before?” and it points me to relevant examples—then I’m far less likely to repeat work unnecessarily. Sometimes repeating an experiment is still justified, but the difference is that you’re making a conscious decision rather than defaulting to repetition because the information is hard to find.

Next-generation, AI-enabled electronic lab notebooks aim to move beyond static documentation, using natural language interfaces and embedded AI agents to help scientists interpret data and guide next steps.
©iStock, Jacob Wackerhausen
What are scientists looking for in the next generation of ELNs?
Everyone wants ease of use. Having been a scientist myself, I know the ideal world is one where I am reviewing data, deciding on something innovative, or doing the work in the laboratory, not sitting in my ELN.
People want to get into the ELN, do what they need to do, and get out as quickly as possible. That’s where large language models and natural language interfaces come in. We have capabilities now where instead of doing 20 or 30 clicks, you can type one natural language phrase, or even speak it, and the AI can execute those actions in the ELN. For the user, that’s huge. Instead of trying to remember how they did something last week, they can just say, “Here’s what I want to do,” and the system does it. They confirm it’s correct and then go to the lab. This lowers the barrier to use and makes scientists much more efficient.
The other dimension is what we talked about earlier—moving from a passive ELN to one where you can ask intelligent scientific questions about the contents of experiments. With AI assistants, it can feel like you have a trusted advisor. Imagine that you’re a junior scientist in a chemistry lab, and you have the head of medicinal chemistry over one shoulder and the best computational scientist in the company over the other. In a perfect world, you could ask those people questions all day. In reality, you can’t. But if you can ask those questions through an AI system embedded in the ELN, that becomes a totally different experience. I wish I’d had that 30 years ago.
How do you see novel ELN technologies meeting, or exceeding, these needs?
The first way is through natural language interfaces. ELNs are complex, like any specialized software. But if you add a natural language interface, scientists no longer need to understand all the mechanics of how the system works. They just need to be able to state what they want to do. That said, scientists still must review what’s been done and make sure it’s correct. We’re not absolving anyone of responsibility for whether an experiment is designed properly or documented correctly. But if you remove that middle step, where a scientist has to remember a 50-click path to complete something, that’s a major gain in efficiency.
The second is bringing AI agents into the ELN so researchers can directly ask scientific questions about the contents of their experiments. That gives them something like a trusted advisor to work with as they build experiments and move projects forward. For the people running labs or the project leaders, the potential impact is even bigger. If you have a group of scientists that effectively has access to the equivalent of the best experts in the company, thanks to the AI agents, then the whole team gets elevated because everyone performs at a higher level. That means projects progress faster and teams can do more science.
