How Conserv Safeguards History: Building an Environmental Monitoring and Preventive Conservation IoT Platform

This is an installment of our “Community Member Spotlight” series, where we invite our customers to share their work, shining a light on their success and inspiring others with new ways to use technology to solve problems.

In this edition, Nathan McMinn, CTO and co-founder at Conserv, joins us to share how they’re helping collections care professionals around the world understand their collections’ environments, make decisions to optimize conditions, and better protect historical artifacts—no IT department required.

Think for a moment about your favorite museum. All of the objects it houses exist to tell the stories of the people who created them and whose lives they were a part of, and to help us understand the past, how systems work, and more. Without proper care, which starts with the correct environment, those artifacts are doomed to deteriorate and will eventually be lost forever (read more about the factors that cause deterioration).

Conserv started in early 2019 with a mission to bring better preventive care to the world’s collections. We serve anyone charged with the long-term preservation of physical objects, from paintings and sculptures to books, architecture, and more. (You’ll often hear our market segment described as “GLAM”: galleries, libraries, archives, and museums.) At the core, all collections curators need to understand the environment in which their collections live, so they can develop plans to care for those objects in the short, mid, and long term.

While damage can come in the form of unforeseen and catastrophic events, like fire, theft, vandalism, or flooding, there are many issues that proactive monitoring and planning can prevent, such as mold growth, color fading associated with light exposure, and mechanical damage caused by fluctuating temperature and relative humidity.

At Conserv, we’ve built a platform to help collections gather the data required to understand and predict long-term risks, get insights from that data, and plan how to mitigate those risks and improve the preservation environment. Today, we’re the only company with an end-to-end solution, from sensors to screen, focused on preventive conservation. Collections like the Alabama Department of Archives and History, among many others, rely on us to develop a deep understanding of their environments.

About the Team


We’re a small team where every member makes a big impact. Currently, we’re at six people, and each person plays a key role in our business:

  • Austin Senseman, our co-founder and CEO, often describes his background as “helping people make good decisions with data.” He has extensive experience in analytics and decision support—and a history of using data to guide organizations to their desired outcomes.
  • Nathan McMinn (this is me!), our other co-founder and CTO, comes from the development world. I’ve spent my career leading engineering teams and building products people love, most notably in the enterprise content management sector.
  • Ellen Orr, a museum preparator turned fantastic collections consultant.
  • Cheyenne Mangum, our talented frontend developer.
  • Melissa King, a preventive conservator who recently joined to help us build better relationships with more collections.
  • Bhuwan Bashel, who is joining in a senior engineering role (oh yeah, he’ll get some TimescaleDB experience quickly!).

About the Project
What

We collect several types of data, but the overwhelming majority of the data we collect and store in TimescaleDB is IoT sensor readings and related metrics. Our solution consists of fleets of LoRaWAN sensors that take detailed environmental readings on a schedule and also capture event-driven data (see this article for an overview of LoRaWAN sensors, use cases, and other details).

So, what ends up in our database is a mix of things like environmental metrics (e.g., temperature, relative humidity, illuminance, and UV exposure), sensor health data (e.g., battery statistics), and data from events (e.g., leak detection or vibration triggers).
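To give a rough idea of that mix, here’s a minimal sketch of what a table for those readings might look like; the table and column names are illustrative, not our actual schema:

```sql
-- Illustrative schema for scheduled sensor readings (names are hypothetical).
CREATE TABLE sensor_readings (
    time        TIMESTAMPTZ      NOT NULL,
    sensor_id   TEXT             NOT NULL,
    temperature DOUBLE PRECISION,  -- degrees Celsius
    humidity    DOUBLE PRECISION,  -- relative humidity, percent
    illuminance DOUBLE PRECISION,  -- lux
    uv          DOUBLE PRECISION,  -- UV exposure
    battery_pct DOUBLE PRECISION   -- sensor health
);

-- Convert the plain PostgreSQL table into a TimescaleDB hypertable,
-- automatically partitioned into chunks by time.
SELECT create_hypertable('sensor_readings', 'time');
```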

We also capture information about our customers’ locations (such as which rooms are being monitored) and data from human observations (such as building or artifact damage and pest sightings) to give our sensor data more context and some additional “texture.” It’s one thing to collect data from sensors, but when you pair that with human observations, a lot of interesting things can happen, as the sketch below suggests.
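That pairing is easy to express in SQL. As a sketch, assuming the hypothetical sensor_readings table above and an equally hypothetical observations table, you could pull the nearest preceding sensor reading for each human observation:

```sql
-- For each human observation, find the closest earlier sensor reading
-- from the same sensor (both table names are hypothetical).
SELECT o.observed_at,
       o.note,
       r.temperature,
       r.humidity
FROM observations o
JOIN LATERAL (
    SELECT temperature, humidity
    FROM sensor_readings
    WHERE sensor_id = o.sensor_id
      AND time <= o.observed_at
    ORDER BY time DESC
    LIMIT 1
) r ON true;
```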

Our UI makes it simple to add a human observation to any data point, collection, or date. See our docs for more details.


For us, it’s all about scale and performance. We collect tens of thousands of data points each day, and our users think about their metrics and trends over years, not days. Also, like anybody else, our users want things to feel fast. So far, we’re getting both from TimescaleDB.

With those criteria met, our next challenge is how to use that data to (1) provide actionable insights to our customers, allowing them to ask and answer questions like “What percentage of time is my collection within our defined environmental ranges?” and (2) offer in-depth analysis and predictions, like possible mold and mechanical damage risks.
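Question (1) maps naturally onto SQL. Here’s a sketch of that “percentage of time within range” calculation against the hypothetical sensor_readings table above, with example bounds of 18 to 22 °C and 40 to 60% relative humidity:

```sql
-- Share of readings inside the target envelope, per sensor, over the
-- past year (bounds and table name are illustrative).
SELECT sensor_id,
       100.0 * count(*) FILTER (
           WHERE temperature BETWEEN 18 AND 22
             AND humidity    BETWEEN 40 AND 60
       ) / count(*) AS pct_in_range
FROM sensor_readings
WHERE time > now() - INTERVAL '1 year'
GROUP BY sensor_id;
```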

This is where we’ve gotten the most value out of TimescaleDB: the combination of a performant, scalable repository for our time-series data, the core SQL features we know and trust, and TimescaleDB’s built-in time-series functionality lets us build new analysis features much faster.

Example of our collections-focused analytics dashboard (see our docs for more details).


Using (and Choosing) TimescaleDB


I first found out about TimescaleDB at an All Things Open conference a few years ago; it was good timing, since we were just starting to build out our platform. Our first PoC used Elasticsearch to store readings, which we knew wouldn’t be a permanent solution. We also looked at InfluxDB, and while Amazon’s Timestream looked promising, it hadn’t been released yet.

Our database criteria were straightforward: we wanted scale, performance, and the ability to tap into the SQL ecosystem. After evaluating TimescaleDB’s design, we were confident that it would meet our needs in the first two categories, but so would many other technologies.

What ultimately won me over was the fact that it’s PostgreSQL. I’ve used it for years in projects of all sizes; it works with everything, and it’s a proven, trustworthy technology—one less thing for me to worry about.

Editor’s Note: For more comparisons and benchmarks, see how TimescaleDB compares to InfluxDB, MongoDB, AWS Timestream, vanilla PostgreSQL, and other time-series database alternatives on various vectors, from performance and ecosystem to query language and beyond.

Current Deployment & Use Cases

Our stack is fairly simple: standard stuff for anyone who has a UI → API → database pattern in their application. Our secret sauce is how well we understand our users and their problems, not our architecture :).

Some of our future plans will require us to get more complex, but for now, we’re keeping it as simple and reliable as possible:

  • Node.js services running in Docker containers on AWS ECS, with TimescaleDB on the database tier
  • React.js frontend
  • Mobile app built with Flutter

In the near future, I’d like to move over to TimescaleDB’s hosted cloud offering; as we get bigger, that’s something we’ll evaluate.

In line with the above, our queries themselves aren’t that clever, nor do they need to be. We make heavy use of PostgreSQL window functions, and the biggest features we benefit from, in terms of developer ergonomics, are TimescaleDB’s built-in time-series capabilities: time_bucket, time_bucket_gapfill, histograms, last observation carried forward, and a handful of others. Almost every API call to get sensor data uses one of those.
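As a rough illustration of what those calls look like in practice, here’s a gap-filled hourly rollup with last observation carried forward, again against the hypothetical sensor_readings table from earlier:

```sql
-- Hourly averages over the past week, with empty buckets filled by
-- carrying the last observed value forward (table name is hypothetical).
SELECT time_bucket_gapfill('1 hour', time) AS bucket,
       sensor_id,
       locf(avg(temperature)) AS avg_temp,
       locf(avg(humidity))    AS avg_rh
FROM sensor_readings
WHERE time > now() - INTERVAL '7 days'
  AND time < now()
GROUP BY bucket, sensor_id
ORDER BY bucket;
```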

We haven’t used continuous aggregates or compression yet, but they’re on my to-do list!
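For the curious, here’s roughly what those two features would look like for the hypothetical table above. This is a sketch of TimescaleDB’s documented syntax, not something we’re running in production yet:

```sql
-- A continuous aggregate that keeps a daily rollup up to date automatically.
CREATE MATERIALIZED VIEW sensor_readings_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', time) AS bucket,
       sensor_id,
       avg(temperature) AS avg_temp,
       min(temperature) AS min_temp,
       max(temperature) AS max_temp
FROM sensor_readings
GROUP BY bucket, sensor_id;

-- Enable native compression, then compress chunks older than 30 days.
ALTER TABLE sensor_readings SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'sensor_id'
);
SELECT add_compression_policy('sensor_readings', INTERVAL '30 days');
```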

Editor’s Note: In addition to the documentation links above, check out our continuous aggregates tutorial for step-by-step instructions and best practices, and read our compression engineering blog to learn more about compression, get benchmarks, and more.

Getting Started Advice & Resources

If you’re considering TimescaleDB, or time-series databases in general, and are already comfortable with Postgres, try it. If you want the biggest ecosystem of tools to use with your time-series data, try it. If you think SQL databases are antiquated and don’t work for these sorts of use cases, try it anyway; you might be surprised.

For anybody out there thinking about an IoT project or company: as a technologist, it’s really tempting to focus on everything that happens before the data gets to the screen. That’s great, and you have to get those things right, but it’s just table stakes.

Anybody can collect data points and put a line graph on a screen. That’s a solved problem. Your challenge is to develop all the context around the data, analyze it in that context, and present it to your users in the language and concepts they already know.

TimescaleDB can help you do that by giving you more time to focus on end-user value and less time to focus on things like “Can I connect tool x to my store?” or “How am I going to scale this thing?”

Other than those words of advice, there’s nothing that hasn’t already been covered in depth by people in the PostgreSQL community who are far smarter than I am :).

We’d like to give a big thank you to Nathan and the Conserv team for sharing their story and, more importantly, for their commitment to helping collections keep history alive. As big museum fans, we’re honored to play a part in the tech stack that powers their intuitive, easy-to-use UI and robust analytics. 💛

We’re always keen to feature new community projects and stories on our blog. If you have a story or project you’d like to share, reach out on Slack (@Ana Tavares), and we’ll go from there.