Remote Sens Ecol Conserv - 2022 - Moeller
Keywords
camera trap, capture probability, motion sensor photography, time lapse photography, viewable area, viewshed area

Abstract
A suite of recently developed statistical methods to estimate the abundance and density of unmarked animals from camera traps require accurate estimates of the area sampled by each camera. Although viewshed area is fundamental to achieving accurate abundance estimates, there are no established guidelines for collecting this information in the field. Furthermore, while the complexities of the detection process from motion sensor photography are generally acknowledged, viewable area (the common factor between motion sensor and time lapse photography) on its own has been underemphasized. We establish a common set of terminology to identify the component parts of viewshed area, contrast the photographic capture process and area measurements for time lapse and motion sensor photography, and review methods for estimating viewable area in the field. We use a case study to demonstrate the importance of accurate estimates of viewable area on abundance estimates. Time lapse photography combined with accurate measurements of viewable area allows researchers to assume that capture probability equals 1. Motion sensor photography requires measuring distances to each animal and fitting a distance sampling curve to account for capture probability of <1.

Correspondence
Anna K. Moeller, Wildlife Biology Program, University of Montana, 32 Campus Drive, Missoula, MT 59812. E-mail: [email protected]

Editor: Marcus Rowcliffe
Associate Editor: Rahel Sollmann

Funding Information
Funding for data collection and administration of the case study was provided by Montana Fish, Wildlife and Parks, the Mule Deer Foundation, and the Rocky Mountain Elk Foundation. Other support for the authors was provided by University of Montana and Oklahoma State University.
doi: 10.1002/rse2.300
© 2022 The Authors. Remote Sensing in Ecology and Conservation published by John Wiley & Sons Ltd on behalf of Zoological Society of London.
This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and
distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.
20563485, 2023, 1, Downloaded from https://zslpublications.onlinelibrary.wiley.com/doi/10.1002/rse2.300 by Sri Lanka National Access, Wiley Online Library on [31/03/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
A. K. Moeller et al. Camera-Based Abundance Estimation
capture-recapture statistical approaches (e.g., Karanth & Nichols, 1998). However, the use of cameras for monitoring abundance became more widely feasible with subsequent development of sampling and statistical approaches to monitor abundance of unmarked wildlife populations (Gilbert et al., 2021).

Today, estimating abundance of unmarked populations remains a major focus in camera trap research, and several methods have emerged for translating observations of animals from cameras into estimates of abundance (reviewed by Gilbert et al., 2020). A subset of these methods relate animal observations to the space directly sampled by each camera's viewshed, and they result in viewshed density estimates that can be extrapolated to abundance within broader sampling frames (Gilbert et al., 2020). We refer to these here as "viewshed density estimators," and they include the random encounter model (REM; Rowcliffe et al., 2008), the random encounter and staying time model (REST; Nakashima et al., 2018), camera trap distance sampling (CTDS; Howe et al., 2017), and time to event and space to event (TTE and STE; Moeller et al., 2018).

The viewshed density estimators are all sensitive to measurements of the sampled area in front of cameras (e.g., Santini et al., 2022). Even when cameras are carefully placed to representatively sample a region of interest, these cameras sample only a tiny fraction of the total study area. As an example, consider 100 cameras placed in a 500 km² study area. If each camera viewshed measured 157 m² (45°, 20 m radius), then the collective area sampled by the cameras would be 15 700 m², or 0.0157 km², which is only 0.0031% or 1/31 847 of the total study area. Thus, small changes in camera viewshed area will have large consequences for density estimates that are extrapolated across a study region that is much larger than the collective viewshed areas. Specifically, for a given dataset, a proportional increase in measurement of a camera's sampled area will result in a corresponding proportional decrease in the density estimate (Cusack et al., 2015). Proportionally, viewshed density estimators are no more sensitive to mismeasurement than any other area-based samplers (e.g., point counts, strip transects, etc.), but small areas provide less leeway for error. For example, a 10% mismeasurement of sampled area will result in a 10% bias of the extrapolated estimate for all methods, but the same absolute amount of mismeasurement (e.g., 10 m²) makes up a larger percentage of error for small areas than for large areas.

Although all viewshed density estimators share a fundamental component – the area sampled by cameras – explicit definition of this viewshed area, how it varies across methodologies, and how to measure it in each case remain intractable and variable across the literature. Here, we define the relevant area for viewshed density estimators, review methodologies for measuring viewshed area, and provide recommendations for future application of viewshed density estimators.

Definitions

We begin by establishing a consistent set of terms, defining how they apply to different types of photography and density estimators, and breaking apart the component parts that can change across time and space (Table 1). We use the term viewshed most broadly to refer to the various delineations of the area in front of a camera trap, with different specific definitions for different types of photography (time lapse and motion sensor). Time lapse photography occurs when camera traps are programmed to take photos at regular, predefined time intervals (e.g., every 10 min), whereas motion sensor photography occurs when a passive infrared (PIR) sensor triggers the camera trap to take a picture.

For time lapse photography, the viewshed is equivalent to viewable area. When a picture is taken, viewable area is the amount of landscape that can be seen by an observer of the photo at a resolution sufficient to identify the target species. Viewable area is affected by viewable angle (the horizontal angle or field of view captured by the camera lens) and viewable distance (how far an observer can reliably see and identify species) (Fig. 1A). Viewable angle and distance for a given camera trap can vary with camera make and model, terrain and vegetation obstructions, daylight versus nighttime flash lighting considerations, and other field conditions (Moll et al., 2020). It is important to note that viewable area is not only defined by camera and landscape characteristics, but also by characteristics of the observer, such as the individual's level of experience or attention to detail. The details of animals in photographs are harder to see the farther they are from the camera, so observers will vary in their ability to pick up those details. This means that the observer – whether human or artificial intelligence – is an integral part of the definition of viewable area. Finally, the size and characteristics of the animal itself can contribute to viewable area; larger animals and those that are easiest to identify can be recorded at farther distances than small animals that could be confused with other species.

For motion sensor photography, the viewshed is defined by the intersection of viewable area with trigger area. Trigger area is the area defined by the trigger angle and trigger distance within which PIR sensors can detect infrared radiation and trigger the camera to take a picture (Fig. 1B). Motion sensor photography occurs when a PIR sensor detects sufficient changes in infrared radiation (due to motion of an object) within the trigger area and triggers the camera shutter (Welbourne et al., 2016). To take a photo of an animal by motion sensor photography, the PIR
Table 1. Definitions of terms describing the area sampled by camera traps.

Capture probability: The probability that an animal is captured in a photo given it is present in the camera's viewshed. For motion sensor photography, capture probability is the joint probability that the motion sensor triggers the camera shutter, the animal registers within the photo, and an observer correctly identifies the species, given something passes through the trigger area. For time lapse photography, capture probability is the probability that an observer correctly identifies an animal given it is in the viewable area.

Detection probability: The probability that an animal in the broader study area encounters a camera and is captured in a photograph. Detection probability is the product of encounter probability and capture probability.

Encounter probability: The probability that an animal will pass through the viewshed of a camera given the animal is present in the study area.

Motion sensor photography: A camera function that takes photos when motion is detected in the trigger area. Infrared radiation differences between the animal and its surface environment trigger the camera to take a photograph. This is the most common means by which animal observations are gathered from camera traps in wildlife research.

Passive infrared (PIR) sensor: A sensor on camera traps that detects differences between the infrared radiation of the animal in the trigger area and the infrared radiation of the surface of the environment. This sensor is sometimes called a "motion sensor" since an animal that moves into a camera's trigger area causes a difference in radiation amounts emitted to the sensor, thus triggering the camera.

Registration angle: The smaller of the viewable angle or the trigger angle; the angle of the area in which trigger area and viewable area overlap.

Registration area: The intersection of the trigger area and viewable area. Registration area is only applicable to motion sensor photography.

Registration distance: The smaller of the viewable distance or the trigger distance; the distance of the area in which trigger area and viewable area overlap.

Time lapse photography: A setting on some camera traps that sets a schedule for photos to be taken of the camera's viewshed at regular time intervals. Currently, this function is only available on certain models of camera traps.

Trigger angle: The horizontal angle of the camera trap's PIR sensor. This is often wider than the lens angle to account for any delays between activation of the PIR sensor and the camera taking a photograph while an animal continues moving.

Trigger area: The area in which PIR sensors can detect infrared radiation and trigger a motion sensor camera to take a picture.

Trigger distance: The maximum distance at which a difference in infrared radiation between an animal and its surface environment can be sensed by a camera's PIR sensor. This can be variable, depending on factors such as environmental heterogeneity, animal speed, and the configuration of the PIR sensor. Trigger distance also decays with distance from the camera's sensor.

Viewable angle: The horizontal angle of the camera trap's lens, which determines how wide of an image the camera takes.

Viewable area: The portion of the ground defined by the viewable distance and viewable angle in which animals can be reliably identified by an observer.

Viewable distance: The maximum distance in front of the camera at which animals can be reliably identified in photographs.

Viewshed: A nonspecific term referring to the sampled area in front of a camera. We use this term to refer to both registration area (for motion sensor photography) and viewable area (for time lapse photography).
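Treating the viewable and trigger regions as idealized circular sectors centered on the same axis (real sensor geometries only approximate this), the registration angle and registration distance defined in Table 1 reduce to pairwise minima, and the registration area follows from the sector-area formula. A minimal sketch with hypothetical camera specifications:

```python
import math

def sector_area(angle_deg, radius_m):
    """Area of a circular sector: pi * r^2 * (angle / 360)."""
    return math.pi * radius_m**2 * angle_deg / 360.0

def registration_area(viewable_angle, viewable_dist, trigger_angle, trigger_dist):
    """Maximum possible registration area for motion sensor photography:
    the overlap of the viewable and trigger sectors, assuming both are
    centered on the camera's axis (an idealization of Table 1's terms)."""
    reg_angle = min(viewable_angle, trigger_angle)  # registration angle
    reg_dist = min(viewable_dist, trigger_dist)     # registration distance
    return sector_area(reg_angle, reg_dist)

# Hypothetical camera: 42-degree lens viewable to 15 m, with a PIR
# trigger angle of 48 degrees (often wider than the lens) out to 20 m.
area = registration_area(42.0, 15.0, 48.0, 20.0)  # a 42-degree, 15 m sector
```

Because each registration quantity is the smaller of its two components, the registration area is never larger than either the viewable area or the trigger area.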
sensor must detect motion in the trigger area and the animal must be within the camera's viewable area. As described by Findlay et al. (2020), each of these steps comes with some level of probabilistic uncertainty regarding whether motion will lead to a trigger and whether that trigger will lead to an animal registering within the photo. Following Findlay et al. (2020), we dub the area where the viewable area and trigger area intersect (i.e., where an animal will both trigger the sensor and register within the viewable area of the photo) the registration area, which can be defined by a registration angle and registration distance. While the intersection of trigger and viewable areas may represent the maximum possible registration area, relative variation in the species' characteristics such as movement speed and body size as well as the speed of camera triggers may cause further reduction or heterogeneity in the functional registration area realized for a given study (Hofmeester et al., 2019; McIntyre et al., 2020).

Viewshed Area and Capture Probability

All viewshed density estimators require accurate measurement of the viewshed area to yield accurate density estimates and reliable extrapolation of density to abundance; thus, the concept of capture probability is a critical initial consideration. The terms capture probability and detection probability have often been used interchangeably with cameras because camera data are used for many different analyses whose original terms may diverge (e.g., capture-recapture and occupancy). Following the definitions proposed by Findlay et al. (2020), we distinguish capture
Figure 1. Visualization of viewable area (blue), trigger area (pink), and registration area for motion sensor photography (purple). (A) The
viewshed area for time lapse photography is equivalent to the viewable area; (B) the viewshed area for motion sensor photography is the
intersection of viewable and trigger areas, known as the registration area.
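The distance-dependent decay in capture probability that this section's correction approaches address can be made concrete with a small simulation. This is an illustrative sketch only, not the authors' analysis: it assumes a half-normal decay (a common distance-sampling choice), recovers the decay parameter by a crude grid-search maximum likelihood, and computes the effective detection distance (EDD), the threshold at which captures missed closer in balance captures recorded farther out. The truncation distance and parameter values are hypothetical; a real study would use dedicated distance-sampling software.

```python
import math
import random

def sector_loglik(sigma, distances, w):
    """Log-likelihood of capture distances r under half-normal decay
    g(r) = exp(-r^2 / (2 sigma^2)), with animal availability proportional
    to r (more ground farther from the camera) and right truncation at w."""
    norm = sigma**2 * (1.0 - math.exp(-w**2 / (2.0 * sigma**2)))
    return sum(math.log(r) - r**2 / (2.0 * sigma**2) - math.log(norm)
               for r in distances)

def fit_sigma(distances, w):
    """Crude grid-search MLE for the decay parameter sigma."""
    grid = [0.5 + 0.05 * i for i in range(400)]  # 0.5 .. 20.45 m
    return max(grid, key=lambda s: sector_loglik(s, distances, w))

def effective_detection_distance(sigma, w):
    """EDD rho, where captures missed inside rho equal captures recorded
    beyond rho: rho^2 = integral_0^w 2 r g(r) dr (half-normal case)."""
    return math.sqrt(2.0 * sigma**2 * (1.0 - math.exp(-w**2 / (2.0 * sigma**2))))

# Simulate capture distances with hypothetical sigma = 6 m, truncation w = 15 m.
random.seed(42)
w_trunc, sigma_true = 15.0, 6.0
captures = []
while len(captures) < 400:
    r = w_trunc * math.sqrt(random.random())            # availability ~ r
    if random.random() < math.exp(-r**2 / (2.0 * sigma_true**2)):
        captures.append(r)                               # capture recorded

sigma_hat = fit_sigma(captures, w_trunc)
edd = effective_detection_distance(sigma_hat, w_trunc)
```

With the decay parameter recovered from the distance distribution alone, the fitted curve simultaneously corrects capture probability and defines an effective sampled area, which is the enmeshment the text describes.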
probability and detection probability. Capture probability is the probability that an animal in front of a camera is captured (i.e., an identifiable photo of the animal is taken, given the animal is in front of the camera). Detection probability is the probability that an animal in the broader study area encounters a camera and is captured. Detection probability is the product of capture probability and encounter probability and therefore incorporates animal movement within the broader study area to the microsite of the camera. Unlike occupancy or capture-recapture, which assume that cameras sample grid cells or animal populations and therefore must account for animals that do not reach the camera, viewshed density estimators essentially work by averaging independent density estimates at different camera viewsheds. Therefore, the sampling unit is the camera's viewshed, not individual animals, and only capture probability – not encounter probability or detection probability – is relevant as defined. Viewshed density estimators assume that cameras are representative of the study area (following sampling theory principles) and cause no behavioral response (i.e., trap attraction or avoidance). Violations of these assumptions would bias density estimates. For the purposes of this paper, we will assume that these model assumptions are met, although we revisit these assumptions in the discussion.

Capture probability can take multiple forms depending on the sampling design (Findlay et al., 2020). For motion sensor photography, capture probability is the joint probability that the motion sensor triggers the camera shutter and the animal registers within the photo (i.e., is present in the viewable area) and an observer correctly identifies the species, given an animal or some other object (such as vegetation) passes through the trigger area (Findlay et al., 2020; Moeller & Lukacs, 2021). For time lapse photography, capture probability reduces to only one of these three components; it is the probability that a species is correctly identified by an observer, given that it is in the viewable area (Moeller & Lukacs, 2021).

Error in any of the components of capture probability would result in a capture probability of <1. A capture probability <1 would inherently bias viewshed density estimators low if naïve analyses were conducted without correction. Fortunately, practitioners of camera trap studies founded on viewshed density have developed four approaches for addressing this potential source of bias. Interestingly, capture probability and viewshed area are fully enmeshed; the methods that correct for capture probability <1 also define viewshed area.

First, camera trap distance sampling incorporates a decay function that parameterizes the reduction in capture probability with increased distance between the animal and the camera and corrects the density estimator accordingly (Cappelle et al., 2021; Howe et al., 2017). The decay function is empirically estimated for a given study using the distribution of distances associated with each photographic capture of a species during the study period (e.g., Harris et al., 2020). Designed for motion sensor photography, camera trap distance sampling inherently includes the integrated effects of viewable area, trigger area, and species-specific traits such as speed or body size. Based on the fitted decay curve, the user will choose a truncation distance beyond which any observations of animals are right truncated, and this serves as the registration distance (Howe et al., 2017). While the decay in capture probability with registration distance is modeled, similar decays with registration angle have been shown in other studies but are not parameterized with camera trap distance sampling (Rowcliffe et al., 2011). Instead, manufacturer specifications for registration angle (i.e., the smaller of the trigger angle and viewable angle) are typically used as inputs for registration
angle (Howe et al., 2017), and variation in capture probability across angles is averaged together and attributed to distance alone. In cases where capture probability is functionally 0 at the outermost portions of the registration angle prescribed by manufacturer specifications (e.g., the smallest species studied by Rowcliffe et al., 2011), a slight negative bias in density estimates may be expected when using manufacturer specifications, though this has not been demonstrated to date.

A second approach to addressing imperfect capture probabilities from motion sensor data is to estimate "effective" capture distance (also called effective detection distance or EDD) and effective capture angle for input into viewshed density estimators such as the random encounter or time-to-event models (Hofmeester et al., 2017; Rowcliffe et al., 2011). Built upon the ideas of distance sampling, the effective capture distance and angle are the single thresholds in distance and angle where the number of captures missed at closer distances or narrower angles is equal to the number of captures recorded at farther distances or wider angles. Like the distance sampling approach, this approach builds probability functions based on distances and angles of animal captures to quantify the rate of decrease in captures with increasing distance and angle from the centerline of the viewshed. In contrast to distance sampling, the fitted functions are then used to estimate the single value of effective distance or effective angle using techniques akin to measuring effective strip width in standard distance sampling (Buckland et al., 2001; Hofmeester et al., 2017). When effective distances and angles are entered into viewshed density estimators, the estimates should be unbiased as if capture probability were equal to 1.

A third approach to accounting for imperfect capture probability from motion sensor photography is to use field tests or other means to estimate maximum distances and angles within which capture probability can be assumed to be 1. In some examples, trials were conducted wherein a human or domestic animal similar in size to the target wildlife species approached the camera from various angles and distances and the distance and angle of first capture were recorded (Cusack et al., 2015; Manzo et al., 2012). In these examples, the authors calculated the mean distance or angle of first capture from 10 or more trials and used these values to define registration area. However, using a value of central tendency such as the mean distance of first capture suggests that in roughly half the trials the animal was not yet captured by the time it reached the mean distance, so capture probability would still be <1. Instead, while applying the REST model to estimate viewshed density, Nakashima et al. (2018) used field trials to identify the central portion within the registration area within which capture probability was 1, and they truncated their data to retain only captures made in that area. While restricting data collection to a smaller subset of the registration area necessitates the loss of data collected outside of this viewshed definition, it facilitates much better adherence to the assumption of perfect capture probability. However, due to the uncertainties inherent in motion sensor data, it may never be possible to define an area where capture probability equals 1.

A fourth approach is to eliminate uncertainty due to motion sensors altogether by using time lapse photography to collect systematically scheduled images. In this form of sampling, capture probability is the probability that an observer can identify an individual given it is within the viewshed at the time a photograph is taken. Thus, viewshed area under time lapse photography is equivalent to the viewable area of the camera lens. Time lapse photography removes many of the factors affecting capture probability that need to be accounted for with motion sensor photography, such as animal approach angle and speed (Hofmeester et al., 2019). Each factor that affects capture probability in time lapse photography affects the viewable area alone, so there are two ways to effectively ensure that capture probability is 1. First, the observer can establish a maximum viewable distance and angle common to all photos at a camera and then truncate observations of animals outside that zone. Unlike with motion sensor photography, this method can reliably produce a capture probability equal to 1 because there is no additional uncertainty created by motion sensors. Second, the observer can measure viewable distance across time by using either landmarks at known distances (e.g., Hofmeester et al., 2017) or artificial intelligence (Haucke et al., 2022). Multiple factors affect viewable area, such as physical obstructions, weather, and time of day, and their effects on density estimation have not been quantified in the context of time lapse photography.

Motion sensor capture probability has been explored in a variety of ways, but there are no established best practices for measuring viewable area on its own, even though it is a critical component of both motion sensor and time lapse viewshed areas. Furthermore, definitions of viewable area in the literature have typically been simplistic measurements of circular sectors (e.g., Moeller et al., 2018), and there is a need to further identify and address factors that reduce viewable area in studies applying viewshed density estimators. We review considerations and methods for estimating viewable area and recommend best practices for considering it in viewshed density estimators.

Measuring Viewable Area

Calculating viewable area is a problem of geometry that can be broken down into discrete steps that depend on how cameras are positioned during setup. The most
common way to deploy a trail camera is parallel to the ground, with the camera's viewable area, a, described as a circular sector (Moeller et al., 2018; Rowcliffe et al., 2008):

a = πr²(θ/360)    (1)

where r is the viewshed radius and θ is the viewshed angle in degrees (Fig. 2A). An alternative, less common camera setup positions the camera at a high attachment point angled toward the ground, creating a trapezoidal viewable area (Loonam et al., 2021) (Fig. 2B). This deployment approach can be used to decrease camera theft and damage (Jacobs & Ausband, 2018). For the elevated camera setup, the trapezoidal area is calculated by:

a = ((s1 + s2)/2)h,    (2)

where s1 is the width of the ground viewable at the base of an image, s2 is the width of the ground at the top of an image, and h is the perpendicular distance between the two. In contrast to the circular sector, the trapezoidal deployment strategy results in a viewable area with definitive end points and therefore is more easily defined and measured. However, this approach has rarely been applied in camera trap research, possibly because of the greater effort required to deploy elevated cameras and a perceived decrease in animal captures (Ausband et al., 2022, although see Jacobs & Ausband, 2018).

The calculation of viewable area for either camera setup requires accurate measurements of the relevant parameters in the field (i.e., s1, s2, and h for the elevated camera setup and θ and r for the circular sector setup). Of the two setups, the trapezoidal viewable area of an elevated camera is the easier to calculate. The researcher can measure s1, s2, and h by viewing photos during deployment and identifying the outermost viewable points on the ground. These points form the edges of the trapezoid, which can be measured on the ground with a tape measure (Fig. 2B). Due to the logistical challenge of repeatedly climbing up to the camera, this approach is most easily implemented with two people.

When measuring the area of a circular sector, the simplest way to determine the viewable angle θ is from the manufacturer's specifications of the lens angle, which typically falls between 35° and 55°, depending on camera make and model (TrailcamPro, 2021). It is important not
Figure 2. Viewable area geometries for two common camera setups. (A) The camera is set up parallel to the ground, typically at a lower height
(indicated by down arrow), and the viewable area is a circular sector, defined by distance r and angle θ; (B) the camera is attached at an elevated
position (indicated by up arrow) and pointed steeply at the ground, creating a trapezoidal viewable area defined by two parallel sides (s1 and s2)
and the perpendicular distance between them (h).
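Both viewable-area geometries shown in Figure 2 are single-line computations. The sketch below implements Eq. (1) and Eq. (2), reproduces the worked sector example from the text (45°, 20 m radius, ~157 m²; the trapezoid measurements here are hypothetical), and then recomputes the introduction's collective sampled fraction for 100 such cameras in a 500 km² study area:

```python
import math

def sector_viewable_area(theta_deg, r_m):
    """Eq. (1): circular-sector viewable area, a = pi * r^2 * (theta/360),
    for a camera mounted parallel to the ground."""
    return math.pi * r_m**2 * theta_deg / 360.0

def trapezoid_viewable_area(s1_m, s2_m, h_m):
    """Eq. (2): trapezoidal viewable area, a = ((s1 + s2)/2) * h, for an
    elevated camera angled at the ground; s1 and s2 are the viewable ground
    widths at the image base and top, h the perpendicular distance between."""
    return (s1_m + s2_m) / 2.0 * h_m

a_sector = sector_viewable_area(45.0, 20.0)       # ~157.1 m^2, as in the text
a_trap = trapezoid_viewable_area(3.0, 9.0, 10.0)  # hypothetical measurements

# Collective sampled fraction from the introduction's worked example:
# 100 such sector viewsheds within a 500 km^2 study area.
fraction = 100 * a_sector / (500 * 1e6)           # ~0.0031% of the study area
```

The tiny sampled fraction is what makes any mismeasurement of these few parameters propagate so strongly into extrapolated density.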
to confuse the viewable angle with the trigger angle, which for most modern camera models is wider than the viewable angle of the lens (TrailcamPro, 2021). If camera specifications are unavailable, θ can be calculated by triggering the camera to take a photo, identifying landmarks on the outer reaches of the photo, then measuring the angle between the landmarks on the ground with a compass. This may be time-intensive but could be accomplished prior to camera deployment if the deployed area is truly flat with an unobstructed view.

The viewable distance (r) of the circular sector is defined as the distance to which an observer can identify animals. In the simplest case of a field deployment on flat ground with an unobstructed view, the camera's viewable distance extends as far as the pixel size shows enough detail for the animal to be identified (Fig. 3A). If the camera does not capture the ground directly in front of it (due to deployment height and study species size), the viewable distance may begin a short distance away from the physical location of the camera. The observer's ability to identify animals may decay with distance, which would lead to underestimates of abundance if ignored. To account for this decay, the researcher could use some of the previously identified techniques to account for imperfect capture probability, including formulating distance sampling for time lapse photography, estimating effective capture distance, or truncating observations beyond a distance at which animals can no longer be recorded reliably. In the last of these, the cutoff distance (r′) is defined as the maximum distance at which animals can be correctly identified in all photos, which will depend on the target species' size and identification characteristics (Fig. 3B). Once defined, the cutoff distance should be marked with flagging, posts, or other identifiable features, and animals beyond the line should be ignored during photo processing.

Equations 1 and 2 assume that cameras are deployed in flat terrain. However, this is rarely realistic, and topography creates a 3-D landscape with additional surface area. Abundance estimates are commonly extrapolated from density estimates without taking surface area into account. This is an appropriate approach so long as both camera viewable area and the study area are measured as if they were flat. This requires that distances are measured as flat-ground distances with a rangefinder or similar technique.

Vegetation, rocks, and topography can cause obstructions that restrict a camera's viewable distance. If obstructions are not factored into the calculation of viewable area, the estimated viewable area will be too large, which will lead to abundance estimates that are biased low. To account for such obstructions, one solution is to divide the circular sector into multiple, smaller sectors with smaller angles and measure the viewable distance in each sector using a rangefinder or tape measure in the field (Fig. 3C) (Idaho Department of Fish and Game, 2018). Alternatively, recent advances in artificial intelligence could be used to determine distances to objects in front of the camera. For example, Haucke et al. (2022) designed a program to measure distances to animals from a camera, which potentially could work just as well to measure distances to the obstacles that dictate viewable area.

In addition to static factors like topography and obstructions, temporally variable factors have the potential to change viewable area. For instance, weather such as fog and snow may cover the camera's lens and reduce or completely restrict r and θ, or s1, s2, and h. More consistently, viewable area likely changes with the time of day. Commonly, viewable area is reduced at night, although this may depend on the type of illumination used by the camera (e.g., white flash or infrared) and the flash strength. The viewable area can be reduced dramatically at night if reflective vegetation close to the camera renders the background completely dark. As another example, the orientation of the camera may allow the sun to shine directly into the camera early or late in the day, which reduces image quality and the distance at which animals can be identified in the photographs. Finally, in some systems, vegetative characteristics change with the seasons, potentially altering the viewable area over longer camera deployments that might start with leaves on but end after leaves fall. To measure viewable area that changes over time, it may be necessary to have landmarks like poles or flagging at known distances or to use artificial intelligence.

Figure 3. Alternative geometries for measuring a circular sector when viewsheds are partially blocked. (A) An example of an obstruction in the circular sector's viewable area, with nothing done to account for it. Viewable area is overestimated, which will result in abundance and density estimates biased low; (B) the cutoff distance (r′) is decreased to the closest obstacle, but θ remains constant; (C) the area is divided into multiple circular sectors, each with angle θ′, and the distance r or r′ is measured for each.

158 © 2022 The Authors. Remote Sensing in Ecology and Conservation published by John Wiley & Sons Ltd on behalf of Zoological Society of London.
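The field protocol above (trigger a test photo, identify landmarks at the photo's edges, and measure the angle between them with a compass) reduces to simple geometry. The sketch below assumes the standard circular-sector area A = (θ/360)πr² (Equations 1 and 2 themselves are not reproduced in this excerpt); the bearings, angle, and 10 m radius are illustrative values, not measurements from the paper.

```python
import math

def viewable_angle(bearing_left, bearing_right):
    """Viewable angle theta (degrees) from compass bearings (degrees) to
    landmarks at the left and right edges of a test photo, handling the
    wraparound at north (e.g., 350 degrees to 32 degrees)."""
    diff = (bearing_right - bearing_left) % 360
    return min(diff, 360 - diff)

def sector_area(theta_deg, r):
    """Area (m^2) of a circular sector with angle theta and viewable distance r."""
    return theta_deg / 360 * math.pi * r ** 2

# Illustrative bearings straddling north: 350 and 32 degrees span 42 degrees
theta = viewable_angle(350, 32)
print(theta)                             # 42
print(round(sector_area(theta, 10), 1))  # 36.7 m^2 at a 10 m viewable distance
```

Taking the smaller of the two arc differences handles the case where the camera faces north and the bearings wrap past 360 degrees.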
Case Study

Because viewable area is reduced by obstructions, we demonstrate the practical importance of viewable area measurements on animal density estimates using a case study from western Montana. We used time lapse photography to estimate ungulate densities during winter (21 December 2019–20 March 2020) within a 252.7 km² winter range study area of mixed forest and grassland. We deployed 98 cameras (models Hyperfire 2 [n = 63] and Hyperfire 1 [n = 19], Reconyx, Holmen, WI; model 119975C [n = 16], Bushnell, Overland Park, KS) at locations identified using generalized random tessellation stratified sampling (Fig. 4A). Cameras were programmed to take pictures every 5 or 10 min across the full study period, and motion triggers were disabled for all cameras. Some cameras failed or were compromised prior to the completion of the study, at which point data were censored from analyses. Data were aligned across cameras by subsampling to a 10-min time lapse sampling period (0:00, 0:10, 0:20, etc.) for analyses.

At each camera site, we established a maximum viewable radius of 30 m, corresponding to the nighttime flash distance of these camera models, and then divided the viewshed into six sectors of equal angles. Within each sector we documented all vegetation or topography features that obstructed visibility and measured their respective distances to the nearest meter. We then estimated viewshed area according to the proportion of each sector that was visible at each distance, and we treated viewshed area per site as constant over time. Visibility generally declined with increasing distance from the camera as well as at the margins of the viewshed (Fig. 4B). The mean viewable area was 255 m², but viewable areas ranged from 74 to 328 m² across all cameras (Fig. 4C). Only a single site yielded the maximum viewable area of 328 m² without any viewshed obstructions.

We estimated density of two ungulate species, mule deer (Odocoileus hemionus) and white-tailed deer (O. virginianus), using STE analysis, following Moeller and Lukacs (2021). We applied two treatments of viewable area to our analysis: (1) we assumed capture probability was equal to 1 within the entire viewshed, out to the maximum radius of 30 m, and applied the uncorrected, maximum viewable area (328 m²) to all camera sites, and (2) we used our field-based measurements of viewable area to correct estimates for viewshed obstructions that reduced capture probability within portions of the viewshed. We censored all observations of deer beyond the maximum radius from all analyses, using field-based markers to delineate the camera viewshed boundary within pictures.

Figure 4. Methods and results of our case study in western Montana. (A) Camera locations chosen by generalized random tessellation stratified sampling. (B) The proportion of each viewshed sector visible, averaged across sites. (C) A histogram of the viewable area of each camera, as determined in the field. (D) Estimates and 95% confidence intervals of white-tailed deer and mule deer abundance from a space-to-event analysis using either the uncorrected camera area (a single, maximum viewable area applied to all cameras) or the corrected camera area (measured for each camera in the field).
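The sector-based field protocol described above (equal-angle sectors, each truncated at its nearest obstruction) can be sketched as follows. Only the 30 m maximum radius comes from the case study; the 42-degree viewshed angle and the obstruction distances are illustrative assumptions, not values measured at our sites.

```python
import math

MAX_R = 30.0  # maximum viewable radius (m), set here by the nighttime flash distance

def viewshed_area(total_angle_deg, obstruction_dists, max_r=MAX_R):
    """Viewshed area (m^2) from per-sector obstruction distances.
    The viewshed is split into equal-angle sectors, one per list entry;
    each sector is truncated at its nearest obstruction, and None marks a
    sector that is unobstructed out to max_r."""
    sector_deg = total_angle_deg / len(obstruction_dists)
    area = 0.0
    for d in obstruction_dists:
        r = max_r if d is None else min(d, max_r)
        area += sector_deg / 360 * math.pi * r ** 2
    return area

# Illustrative 42-degree viewshed, six sectors, two blocked at 12 m and 18 m
print(round(viewshed_area(42, [None, None, 12, 18, None, None]), 1))  # 248.5
```

Obstructions recorded beyond the maximum radius are capped at max_r, so distant features do not inflate the estimate.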
The resulting data set included 893 and 651 images of white-tailed and mule deer, respectively. We estimated density and variance from the exponential likelihood (Moeller et al., 2018). We extrapolated density estimates to total abundance within the entire winter range study area according to the full area of that study area boundary.

Uncorrected abundance estimates were 781 (95% CI 732–832) white-tailed deer and 526 (487–569) mule deer within the study area, assuming maximum viewshed area across sites. After including field-based measurements of the effects of vegetation and topography on viewshed area, corrected abundance estimates were 1006 (95% CI 944–1073) white-tailed deer and 678 (627–733) mule deer (Fig. 4D). When assuming capture probability was equal to 1, viewshed area estimates (i.e., 328 m²) were 29% larger than the average field-measured area across sites (i.e., 255 m²; range: 74–328 m²). Correspondingly, area-corrected abundance estimates were 29% higher than uncorrected estimates for both deer species. While we do not have concurrent population estimates directly aligned with this area and study period for calibration, agency biologists did observe minimum mule deer counts of 331 and 360 during two aerial surveys in April 2019 that covered a subset (63%) of this study area. Because these counts did not correct for imperfect detection (sightability estimates averaged 57%–76% for mule deer spring surveys elsewhere in Montana; Mackie et al., 1998) and excluded deer within the remaining 37% of the study area, they suggest that true abundance of mule deer exceeds our uncorrected estimate of 526 and may be closer to our corrected estimate.

Discussion

Clear definitions of viewable area, trigger area, and registration area bring to light some practical considerations that may not be intuitive. First, because lens angle is smaller than trigger angle on many camera models and registration angle is defined as the intersection of the two, registration angle will often need to be defined by the manufacturer-specified lens angle rather than by the manufacturer-specified trigger angle or by a walk test conducted by the user. Second, measurements of trigger distance and trigger angle should never be used in calculations of viewable area, so walk tests are not an appropriate tool when the focus is on viewable area. It is important to think about viewable area and registration area as separate entities and to measure the relevant components of each.

An additional example of unintuitive considerations arises when motion sensor photography is used and viewable distance is longer than trigger distance (Fig. 5). This is the case for cameras deployed parallel to the ground in open landscapes, such as food plots or grasslands. In this scenario, motion in the trigger area can produce photos of animals beyond the registration area, which must be ignored for accurate results. To demonstrate this, imagine a herd of deer present outside the registration area but inside the viewable area. Under normal circumstances, nothing triggers the camera and there is no photo record of these deer (Fig. 5B). However, motion in the trigger area (whether by another individual of the herd, an individual of a different species, or vegetation) will cause a photo to be taken, and the herd suddenly becomes visible, purely by accident (Fig. 5D). If the entire herd is included in the analysis, density estimates will be biased high because animals outside the registration area were included in the data. Furthermore, because this process is inconsistent over time (animals in the viewable-only area are sometimes included and sometimes not), it has the potential to cause problems for fitting the capture probability curves needed for camera trap distance sampling and effective capture distance. This example highlights the need to exercise extreme caution when using motion sensor photography; observations outside the registration area should always be excluded.

The decision to use time lapse photography or motion sensor photography often comes down to a variety of tradeoffs (Table 2). Time lapse photography provides a perfect record of camera functionality, and given the right setup (e.g., posts or flagging to mark known distances in the photo, or artificial intelligence software that can estimate distances to landmarks), viewable area can be calculated from the photos at all times. Time lapse photography results in a capture probability equal to or very near 1, but practitioners may feel concerned about its potential to produce few photos of the study species and large numbers of photos of no animals. However, the sampling unit of interest for density estimation is the viewshed area, not the animals themselves. By recording what are commonly referred to as "empty" photos, time lapse photography collects "true" zeroes and creates a complete presence-absence dataset at a given point. Of course, for low-density species, it is possible that time lapse imagery could fail to detect the species at all, resulting in no estimate. On the other hand, when motion sensor photography is used, the same number of cameras is needed to collect a representative sample of the study area, but repeated observations of individual animals are needed to fit distance sampling curves to correct density estimates for imperfect capture probability caused by motion sensors. Therefore, motion sensor photography might produce more photos of the study species but require a hard-to-fit capture probability function to make up for complex and imperfect capture probability. Additionally, for low-density species, motion sensor photography could still fail to produce sufficient observations to fit the probability function and therefore fail to provide a robust estimate of abundance.
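The case study estimated density and variance "from the exponential likelihood" of the space-to-event (STE) model; the actual analysis used the spaceNtime R package (Moeller & Lukacs, 2021). As a minimal sketch of the underlying idea, written here in Python under the simplifying assumption of a right-censored exponential model, the maximum likelihood estimator of density has a closed form. All data values below are hypothetical, not from the case study.

```python
def density_mle(space_to_event, censored_areas):
    """MLE of density D (animals per unit area) for an exponential
    space-to-event model with right censoring: a detection after
    accumulating area s contributes D*exp(-D*s) to the likelihood, while an
    occasion with no detection anywhere in total sampled area a contributes
    exp(-D*a). Maximizing gives the closed form D = n / (sum(s) + sum(a))."""
    n = len(space_to_event)
    return n / (sum(space_to_event) + sum(censored_areas))

# Hypothetical occasions: 3 detections after 0.05, 0.12, and 0.08 km^2 of
# accumulated viewshed area, plus 5 occasions censored at 0.2 km^2 each
print(round(density_mle([0.05, 0.12, 0.08], [0.2] * 5), 2))  # 2.4 animals per km^2
```

Note that occasions with no detection enter only through the censoring term, which is one way to see why time lapse "empty" photos still carry information about density.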
Figure 5. Patterns of photographic capture with motion sensor photography. (A) An animal enters the registration area, and a photo is taken with some probability (i.e., capture probability); this is motion sensor photography working as intended. (B) Animals are present in the viewable area only and nothing enters the trigger area, so no photo is taken. (C) An animal enters the trigger area but not the registration area, so there is some probability a photo is taken, but no animal will register in it. (D) An animal enters the trigger area, which causes a photo to be taken with some probability, and animals in the viewable-only area are captured by accident.

The history of camera trap technology has led to motion sensor photography being the default choice for every type of study, including abundance and density estimation. Rather than simply choosing the default methodology, researchers should critically assess whether the information gained from a greater number of species observations is an adequate tradeoff for the information required to fit a probability function. Although motion sensor photography has deficiencies, it may be the only option in certain cases. Currently, not all camera models have the capability of taking time lapse photos, and some of the viewshed density estimators (e.g., REM and TTE) require cameras to approximate continuous sampling to meet model assumptions.

It is important to measure area for each camera separately because viewshed area can be highly variable between cameras due to location, camera model, and deployment setup (Fig. 4C). Furthermore, the assumption that photographic captures decay only with distance may not be sufficient, because portions of a viewshed can be blocked by vegetation, landscape features, or topography. Some viewshed density estimators currently allow for viewsheds that change between cameras and over time (e.g., STE and TTE). Other viewshed density estimators may need to be reformulated slightly or use time-varying covariates in the fitted decay function to account for viewable area that changes over time.

As our case study illustrated, viewshed area measurements have proportional effects on animal density estimates. The assumption that the entirety of a viewshed is sampled will result in abundance estimates that are biased low in proportion with the amount of unobservable space in front of cameras, as was observed in our estimation of deer abundance in northwest Montana. Field measurements of viewshed area can be used to correct for this bias. To obtain reliable estimates of animal density across a variety of field conditions, we recommend that the same degree of care be devoted to measuring the sampled area of viewsheds as is devoted to counting the animals within them.
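Because estimated density scales with the inverse of the area assumed to be sampled, the proportional effect reported in the case study can be sanity-checked with a one-line rescaling. This is a back-of-envelope check using the study-wide mean area, not a substitute for refitting the STE model with per-camera areas.

```python
def rescale_abundance(uncorrected, assumed_area, measured_area):
    """Rescale an abundance estimate made with an overstated viewshed area:
    estimated density, and hence extrapolated abundance, is proportional to
    1/area, so the correction is just the ratio of the two areas."""
    return uncorrected * assumed_area / measured_area

# Uncorrected white-tailed deer estimate made with the 328 m^2 maximum area,
# rescaled by the 255 m^2 mean field-measured area
print(round(rescale_abundance(781, 328, 255)))  # 1005, close to the reported 1006
```

The paper's full analysis applied per-camera areas within the STE likelihood, which is presumably why the reported corrected estimate (1006) differs slightly from this ratio-based check.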
Table 2. Tradeoffs for motion sensor and time lapse photography according to their relative advantages (orange) and disadvantages (red).

Camera model
  Motion sensor: standard on all camera traps.
  Time lapse: currently only available with certain models of cameras.

Capture probability
  Motion sensor: complex; the PIR sensor must trigger a photo, the animal must be captured in the photo, and the observer must identify the animal.
  Time lapse: simple; the observer must identify the animal.

Factors affecting capture probability
  Motion sensor: many.
  Time lapse: few.

Certainty of absence
  Motion sensor: if no photo exists, one cannot be certain whether no animal entered the registration area or the camera malfunctioned.
  Time lapse: photos are always taken, so absences are certain and observers can identify times of camera malfunction.

Viewshed components
  Motion sensor: trigger area + viewable area = registration area.
  Time lapse: viewable area only.

Measuring viewshed area
  Motion sensor: complex; the most accurate methods are data-intensive and time-consuming.
  Time lapse: simple; defined by the viewable area and (most easily) a cutoff distance.

Expected number of animal observations
  Motion sensor: greatest potential for obtaining observations of the study species.
  Time lapse: observations of the study species only when animals are in the viewable area at the scheduled time.

Relevance of data for other research questions
  Motion sensor: data can be used with other density estimators, as well as to investigate movement, behavior, occupancy, and competition.
  Time lapse: has only been applied to density estimation with the STE model (but has the potential to be applied to other research questions).
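The multi-step capture process for motion sensor photography in Table 2 implies that, at best, overall capture probability is the product of its component probabilities. The values below are hypothetical, and the independence of the components is a simplifying assumption, not something the table asserts.

```python
def motion_capture_probability(p_trigger, p_in_frame, p_identified):
    """Overall capture probability for motion sensor photography as the
    product of its sequential components (PIR trigger fires, animal is
    captured in the frame, observer identifies the animal), assuming the
    components are independent."""
    return p_trigger * p_in_frame * p_identified

# Even optimistic component probabilities compound to well below 1
print(round(motion_capture_probability(0.8, 0.9, 0.95), 3))  # 0.684
```

By contrast, time lapse photography drops the first two factors, leaving only observer identification.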
Specifically, viewsheds should be discretely defined and measured in the field or with artificial intelligence, and photographic captures should be recorded with strict adherence to those viewshed boundaries. Factors that affect viewshed area, such as vegetation, topography, and daylight, should be addressed when measuring the area sampled and identifying animal captures.

Viewshed density estimators are sensitive to mismeasurements of area (Cusack et al., 2015). Although this is proportionally no different than other area-based sampling methods, the effect may be dramatic because viewshed density estimators sample very small portions of a study area and extrapolate estimates to very large sampling frame areas. As an extension of this principle, the same magnitude of mismeasurement (e.g., undermeasuring viewshed distance by 1 m) results in proportionally more error for small camera areas than large camera areas. For example, mismeasuring a 5 m viewshed distance as 4 m underestimates area by 36%, whereas mismeasuring a 10 m viewshed distance as 9 m underestimates camera area by only 19%. This means that for the same magnitude of mismeasurement, density and abundance estimates will be less biased for large viewsheds than small viewsheds. Thus, particular care should be taken to estimate the areas of small viewsheds accurately. In open landscapes, the viewable area is much larger than the registration area, so it can be advantageous to use time lapse photography and take advantage of the much larger viewshed.

When cameras are deployed representatively using the principles of sampling theory, the proportion of area sampled has no bearing on the bias or precision of the density estimate. Animals may be unevenly distributed throughout the study area (perhaps due to habitat heterogeneity), so it is important to deploy cameras using a rigorous sampling design to capture a representative sample of the study area, thereby ensuring a statistically unbiased estimate. Although any one camera may have a high or low number of animal visits (i.e., differential encounter probability at different cameras due to heterogeneous animal density), the overall estimate will be unbiased if the sampling design is unbiased. Furthermore, with representative sampling, precision is derived from the number of cameras deployed and the length of time they sample, not the proportion of area covered. As heterogeneity in animal distribution increases, more cameras are necessary to achieve the same level of precision.

In addition to capture probability and viewshed area, violations of model assumptions can also influence abundance estimates. For example, behavioral avoidance of or attraction to the camera (i.e., trap shyness or trap happiness) would result in biased estimates. Therefore, cameras should not be baited or deployed preferentially at high-use areas to maximize the number of photographic observations. Additionally, the methods we reviewed and described only account for animals that use two-dimensional space. They do not take into account animals that use three-dimensional space, such as climbing trees above the camera's visibility or burrowing underground below it. Rather than calculating density as an estimate of animals per volume (as opposed to animals per area), these animals may be considered unavailable for detection.
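The worked percentages in the mismeasurement example above follow directly from area scaling with the square of viewshed distance: for any fixed sector angle, the angle cancels out of the area ratio, so only the distances matter.

```python
def area_underestimate_pct(true_r, measured_r):
    """Percent by which a sector's area is underestimated when its viewshed
    distance is mismeasured. Area scales with r**2 for a fixed angle, so the
    angle cancels out of the ratio (1 - (measured_r/true_r)**2)."""
    return 100 * (1 - (measured_r / true_r) ** 2)

print(round(area_underestimate_pct(5, 4)))   # 36
print(round(area_underestimate_pct(10, 9)))  # 19
```

The same 1 m error costs nearly twice the relative area at 5 m as at 10 m, which is why small viewsheds demand the most careful measurement.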
conceptual framework to identify and correct for biases in detection probability of camera traps enabling multi-species comparison. Ecology and Evolution, 9, 2320–2336.
Hofmeester, T.R., Rowcliffe, J.M. & Jansen, P.A. (2017) A simple method for estimating the effective detection distance of camera traps. Remote Sensing in Ecology and Conservation, 3, 81–89.
Howe, E.J., Buckland, S.T., Després-Einspenner, M.-L. & Kühl, H.S. (2017) Distance sampling with camera traps. Methods in Ecology and Evolution, 8, 1558–1565.
Idaho Department of Fish and Game. (2018) Protocol for statewide ungulate camera deployments. Boise, ID: IDFG.
Jacobs, C.E. & Ausband, D.E. (2018) An evaluation of camera trap performance - what are we missing and does deployment height matter? Remote Sensing in Ecology and Conservation, 4, 352–360.
Karanth, K.U. (1995) Estimating tiger Panthera tigris populations from camera-trap data using capture-recapture models. Biological Conservation, 71, 333–338.
Karanth, K.U. & Nichols, J.D. (1998) Estimation of tiger densities in India using photographic captures and recaptures. Ecology, 79, 2852–2862.
Loonam, K.E., Ausband, D.E., Lukacs, P.M., Mitchell, M.S. & Robinson, H.S. (2021) Estimating abundance of an unmarked, low-density species using cameras. Journal of Wildlife Management, 85, 87–96.
Mackie, R.J., Pac, D.F., Hamlin, K.L. & Dusek, G.L. (1998) Ecology and management of mule deer and white-tailed deer in Montana. Helena, Montana: Montana Fish, Wildlife and Parks.
Manzo, E., Bartolommei, P., Rowcliffe, J.M. & Cozzolino, R. (2012) Estimation of population density of European pine marten in Central Italy using camera trapping. Acta Theriologica, 57, 165–172.
McIntyre, T., Majelantle, T.L., Slip, D.J. & Harcourt, R.G. (2020) Quantifying imperfect camera-trap detection probabilities: implications for density modelling. Wildlife Research, 47, 177–185.
Moeller, A.K. & Lukacs, P.M. (2021) spaceNtime: an R package for estimating abundance of unmarked animals using camera-trap photographs. Mammalian Biology. Available from: https://doi.org/10.1007/s42991-021-00181-8
Moeller, A.K., Lukacs, P.M. & Horne, J.S. (2018) Three novel methods to estimate abundance of unmarked animals using remote cameras. Ecosphere, 9, e02331.
Moll, R.J., Ortiz-Calo, W., Cepek, J.D., Lorch, P.D., Dennis, P.M., Robison, T. et al. (2020) The effect of camera-trap viewshed obstruction on wildlife detection: implications for inference. Wildlife Research, 47, 158–165.
Nakashima, Y., Fukasawa, K. & Samejima, H. (2018) Estimating animal density without individual recognition using information derivable exclusively from camera traps. Journal of Applied Ecology, 55, 735–744.
Nichols, J.D. & Williams, B.K. (2006) Monitoring for conservation. Trends in Ecology and Evolution, 21, 668–673.
O'Connell, A.F., Nichols, J.D. & Karanth, K.U. (Eds.). (2011) Camera traps in animal ecology: methods and analyses. Tokyo, Japan: Springer.
Rowcliffe, J.M., Carbone, C., Jansen, P.A., Kays, R. & Kranstauber, B. (2011) Quantifying the sensitivity of camera traps: an adapted distance sampling approach. Methods in Ecology and Evolution, 2, 464–476.
Rowcliffe, J.M., Field, J., Turvey, S.T. & Carbone, C. (2008) Estimating animal density using camera traps without the need for individual recognition. Journal of Applied Ecology, 45, 1228–1236.
Santini, G., Abolaffio, M., Ossi, F., Franzetti, B., Cagnacci, F. & Focardi, S. (2022) Population assessment without individual identification using camera-traps: a comparison of four methods. Basic and Applied Ecology, 61, 68–81.
Silver, S.C., Ostro, L.E.T., Marsh, L.K., Maffei, L., Noss, A.J., Kelly, M.J. et al. (2004) The use of camera traps for estimating jaguar Panthera onca abundance and density using capture/recapture analysis. Oryx, 38, 148–154.
Steenweg, R., Hebblewhite, M., Kays, R., Ahumada, J.A., Fisher, J.T., Burton, A.C. et al. (2017) Scaling up camera traps: monitoring the planet's biodiversity with networks of remote sensors. Frontiers in Ecology and the Environment, 15, 26–34.
Suwanrat, S., Ngoprasert, D., Sutherland, C., Suwanwaree, P. & Savini, T. (2015) Estimating density of secretive terrestrial birds (Siamese Fireback) in pristine and degraded forest using camera traps and distance sampling. Global Ecology and Conservation, 3, 596–606.
TrailcamPro. (2021) Trail camera detection & field of view angle. Available from: https://www.trailcampro.com/pages/trail-camera-detection-field-of-view-angle
Welbourne, D.J., Claridge, A.W., Paull, D.J. & Lambert, A. (2016) How do passive infrared triggered camera traps operate and why does it matter? Breaking down common misconceptions. Remote Sensing in Ecology and Conservation, 2, 77–83.