Culturomics: a low-cost opportunity to evaluate conservation campaigns

This blog post was written by Dr Diogo Veríssimo, Lead Researcher of the Biodiversity and Behavioral Team at the Environmental Change Institute, University of Oxford, UK, and Dr Gabriel Caetano, Postdoctoral Researcher at the Department of Zoology, University of Brasília, Brazil.

When conservationists launch digital campaigns, we often ask: did it work? Did the videos, memes, or hashtags actually spark change? For most campaigns, the answer is: we simply don’t know. Evaluation is expensive, time-consuming, and often feels out of reach for projects with limited budgets.

But things are changing. Our recent study used culturomics—the analysis of large-scale digital data—to measure the impact of online conservation campaigns. By harnessing the digital footprints that people leave while browsing the internet, culturomics opens up opportunities to evaluate what would otherwise be out of reach.

Why evaluation matters

Digital campaigns are now used everywhere in conservation, ranging from global efforts led by celebrities to local grassroots projects. Yet robust evaluations are rare. A review of demand reduction campaigns for illegal wildlife products found that only a third of campaigns reported any evidence of impact (or lack thereof). Moreover, most evaluations were not rigorous enough for changes in the audience’s perceptions or behaviours to be linked back to the campaign itself.

Without evaluation, we’re left with assumptions. Did a spike in donations or likes mean the campaign worked? Or was it a coincidence? And importantly, how do we learn and improve if we don’t know what succeeded and what didn’t?

Culturomics in action

In our study, we applied culturomics to evaluate two conservation campaigns:

  • A regional campaign in India promoting the biodiversity of the Western Ghats.
  • A global campaign using humorous skits on YouTube to spark conservation conversations.
Two of the species featured in the Western Ghats videos: Nilgiri laughingthrush Montecincla cachinnans (Photo by Antony Grossy / Wikimedia Commons / CC BY-SA 4.0) and Kottigehara dancing frog Micrixalus kottigeharensis (Photo by Amatya Sharma / iNaturalist / CC BY-NC 4.0).

We used Wikipedia page views as our main dataset. The logic is simple: when people become interested in a topic, they go looking for more information, often on Wikipedia. Wikipedia was the seventh most accessed website in the world in 2025 and is the largest educational resource on the internet in terms of reach and content volume, hosting more than 60 million articles in over 300 languages. Wikipedia therefore seemed like the obvious choice for evaluating whether online campaigns are encouraging people to learn more about biodiversity and conservation. The act of visiting a page on Wikipedia does little, by itself, to affect the conservation of biodiversity. However, searching for information on a topic is the first step in a chain of behaviours that could lead to change.
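For readers who want to explore this kind of data themselves, page-view counts are freely available through the public Wikimedia Pageviews REST API. Below is a minimal sketch in Python; the article title, language edition, date range and contact address are illustrative placeholders, not the settings used in our study.

```python
# Minimal sketch: pull daily Wikipedia page views for one article using the
# public Wikimedia Pageviews REST API. The article, project and date range
# below are illustrative placeholders, not the ones from the study.
import requests

API = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article"

def daily_pageviews(article, project="en.wikipedia", start="20190101", end="20191231"):
    """Return a list of (YYYYMMDD, views) tuples for one Wikipedia article."""
    # "user" filters out known bots and spiders; "all-access" combines desktop and mobile.
    url = f"{API}/{project}/all-access/user/{article}/daily/{start}/{end}"
    # Wikimedia asks API clients to identify themselves with a User-Agent string.
    resp = requests.get(url, headers={"User-Agent": "culturomics-demo/0.1 (example@example.org)"})
    resp.raise_for_status()
    items = resp.json()["items"]
    return [(it["timestamp"][:8], it["views"]) for it in items]

if __name__ == "__main__":
    series = daily_pageviews("Nilgiri_laughingthrush")
    print(f"{len(series)} days retrieved, first entry: {series[0]}")
```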

We compared changes in page views for topics mentioned in the campaigns against similar “control” topics that weren’t targeted, so we could see whether the campaign generated a measurable effect on Wikipedia traffic.

However, this comparison is far from straightforward. Even pages about similar topics can show very different temporal dynamics, making it hard to tell whether observed differences were the result of the conservation campaigns or of other confounding factors. To address this limitation, we applied a synthetic control method—a statistical way to construct a counterfactual, a credible estimate of what would have happened without the campaign. Importantly, this lets us go beyond simple before-and-after comparisons, which are notoriously misleading.

Graphical explanation of the ‘synthetic control’ method. (a) The trajectory of one treatment time series, which has been affected by an intervention, is compared to multiple control time series, which have not been affected. (b) An algorithm reweights and combines the control time series into a single ‘synthetic control’ that matches the treatment time series in the period before the intervention. (c) The synthetic control is projected into the period after intervention to approximate the trajectory of the treatment time series as if the intervention never happened. Figure reproduced from Caetano et al. (2025) under a CC BY 4.0 license.
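To make the weighting step in panels (a)–(c) concrete, here is a minimal sketch in Python using simulated page-view series. The series, the intervention day and the optimiser settings are all made up for illustration; the published analysis uses its own implementation and inference procedures.

```python
# Minimal sketch of the synthetic-control idea: find non-negative weights that
# sum to one so the weighted controls match the treated series before the
# intervention, then project that mix forward as the counterfactual.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, T0, n_controls = 120, 90, 5                       # 120 days, intervention on day 90
controls = rng.poisson(100, (n_controls, T)).astype(float)
treated = 0.5 * controls[0] + 0.5 * controls[3]      # pre-period mimics a mix of controls
treated[T0:] += 30                                   # simulated campaign adds views afterwards

def pre_period_loss(w):
    """Squared error between treated series and weighted controls before the intervention."""
    synthetic = w @ controls
    return np.sum((treated[:T0] - synthetic[:T0]) ** 2)

# Weights constrained to be non-negative and to sum to one.
res = minimize(
    pre_period_loss,
    x0=np.full(n_controls, 1 / n_controls),
    method="SLSQP",
    bounds=[(0, 1)] * n_controls,
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1},
)
synthetic = res.x @ controls

# The estimated campaign effect is the post-intervention gap between the
# observed treated series and its synthetic counterfactual.
effect = (treated[T0:] - synthetic[T0:]).mean()
print("weights:", np.round(res.x, 2), "estimated effect:", round(effect, 1))
```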

What we found

The results were sobering. The regional campaign produced limited, localized impacts—for example, boosting engagement in local-language Wikipedia pages. But the global campaign showed no measurable effect on online engagement.

Even more striking, we found a potential negative impact for one campaign topic in India, where page views actually declined relative to the counterfactual.

This doesn’t mean the campaigns “failed” outright. Instead, it highlights how difficult it is for digital campaigns to cut through the noise, and how valuable it is to have tools that show us when they do—or don’t—shift the needle.

Effect of two outreach campaigns on Wikipedia views for 14 different topics and 10 local languages, evaluated using a synthetic control analysis. Points and associated error bars indicate the size and direction of any campaign effect. Points in black show no statistically significant campaign effect. Points in red and blue show significant campaign effects, depending on certain statistical adjustments (see original paper for full details). The languages explored were: English (en), Gujarati (gu), Hindi (hi), Kannada (kn), Malayalam (ml), Marathi (mr), Tamil (ta), Telugu (te), Tulu (tcy) and Urdu (ur). Figure reproduced from Caetano et al. (2025) under a CC BY 4.0 license.

Why culturomics is a game-changer

Culturomics won’t fully replace surveys, interviews, or field observations, which actively engage the public and can be designed to obtain much more specific information. It relies on the observation of people’s spontaneous behaviour on the internet to make inferences about their perceptions and attitudes. As in psychology or animal behaviour research, many external factors can influence this behaviour, and the motivation behind it is not always clear.

However, culturomics offers considerable advantages conservationists desperately need: scale and efficiency. A single researcher trained in culturomics can sample the online behaviour of millions of internet users across continents and languages, using a fraction of the time, money and human resources needed to do one survey at one college campus.

In short, culturomics allows us to ask questions we couldn’t afford to ask before. Did a hashtag campaign shift attention toward an endangered species? Did a viral video translate into more online searches about conservation solutions? These are now within reach.

What practitioners should keep in mind

Before rushing to apply culturomics to your own work, a few practical notes:

  • Engagement ≠ behaviour change: A spike in page views or search queries doesn’t mean people changed their behaviour. It’s an early signal of interest. To understand whether that interest leads to donations, votes, or shifts in consumption, we’ll need to integrate culturomics with other methods.
  • Not all topics are equal: Our results suggest culturomics works best for localized, specific topics rather than broad global ones. Conservationists should think carefully about what digital traces their campaigns are likely to leave.
  • Transparency matters: Sharing not just successes but also failures is critical. If a campaign shows no measurable effect, that is still valuable evidence that helps the whole community learn and improve.
  • Capacity is key: While the tools are low-cost, they require digital and (advanced) statistical literacy. Partnerships with data scientists, or training in basic coding and analytics, will make a big difference.
  • Biases: Differences in access to the internet, or state-enforced restrictions on certain websites, are important factors to consider when interpreting culturomics data. This can be partially addressed by segmenting the data so that less-represented regions or languages can be interpreted in isolation.
  • Confounding factors: Homonyms are a big one: an apparent increase in interest in pythons, the family of large constrictor snakes, may actually reflect increased interest in the programming language of the same name. There are tools to handle these situations, specific to each dataset, such as the “topics” feature in Google Trends or Wikidata IDs for Wikipedia (see the sketch after this list).
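As an illustration of the Wikipedia route, the sketch below uses the standard Wikidata API to map one concept to its article titles across language editions, so that page-view queries count the right page in each language. The concept (the snake family Pythonidae) and the contact address in the User-Agent are examples only.

```python
# Minimal sketch: use Wikidata sitelinks to find the exact article titles for a
# single concept across language editions, so page-view queries target the
# snake family Pythonidae rather than the Python programming language.
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def sitelinks_for(title, site="enwiki"):
    """Map a Wikipedia title on one site to its titles in all other language editions."""
    params = {
        "action": "wbgetentities",
        "sites": site,
        "titles": title,
        "props": "sitelinks",
        "format": "json",
    }
    resp = requests.get(WIKIDATA_API, params=params,
                        headers={"User-Agent": "culturomics-demo/0.1 (example@example.org)"})
    resp.raise_for_status()
    # The response is keyed by the Wikidata item ID; take the single matching entity.
    entity = next(iter(resp.json()["entities"].values()))
    return {link["site"]: link["title"] for link in entity["sitelinks"].values()}

if __name__ == "__main__":
    links = sitelinks_for("Pythonidae")
    # e.g. links.get("hiwiki") gives the Hindi article title to pass to the pageviews API
    print(links.get("hiwiki"), links.get("knwiki"))
```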

Looking ahead

Conservation needs more than compelling stories—it needs evidence of impact. Culturomics isn’t a silver bullet, but it’s a powerful addition to the toolbox, especially for practitioners working with limited budgets.

Imagine if every conservation campaign, no matter how small, could check whether it sparked online engagement. Over time, we’d build a collective evidence base of what works, where, and why. This would allow conservationists to stop relying on assumptions and start making evidence-based decisions, improving the design and reach of future campaigns.

For a field where resources are always stretched thin, that’s an opportunity we can’t afford to ignore. Demonstrating measurable impact, even if it is just an early signal of attention, can strengthen the case for continued support and investment in conservation outreach.
