Section: Animal Science
Topic: Agricultural sciences

Goats who stare at video screens – assessing behavioural responses of goats towards images of familiar and unfamiliar con- and heterospecifics

Corresponding author(s): Deutsch, Jana (deutsch@fbn-dummerstorf.de); Nawroth, Christian (nawroth@fbn-dummerstorf.de)

10.24072/pcjournal.473 - Peer Community Journal, Volume 4 (2024), article no. e94.


Abstract

Many cognitive paradigms rely on active decision-making, creating participation biases (e.g. subjects may lack motivation to participate in the training), and once-learned contingencies may bias the outcomes of subsequent similar tests. Here we present a looking time approach to study goat perception and cognition that requires neither extensive training of the animals nor reliance on learned contingencies. In our looking time paradigm, we assessed the attention of 10 female dwarf goats (Capra hircus) towards 2D visual stimuli, namely images of familiar and unfamiliar con- and heterospecifics (i.e. goats and humans), using an experimental apparatus containing two video screens. Spontaneous behavioural reactions to the presented stimuli, including looking behaviour and the time spent with the ears in different positions, were analysed using linear mixed-effects models. We found that goats looked longer at the video screen presenting a stimulus compared to the screen that remained white. Goats looked longer at images depicting other goats compared to humans, while their looking behaviour did not significantly differ when confronted with familiar vs. unfamiliar individuals. We did not find statistical support for an association between ear positions and the presented stimuli. Our findings indicate that goats are capable of discriminating between two-dimensional con- and heterospecific faces, but also raise questions about their ability to categorise other individuals regarding their familiarity using 2D face images alone. Our subjects might either lack this ability or might be unable to spontaneously recognise the provided 2D images as representations of real-life subjects. Alternatively, subjects might have been equally motivated to pay close attention to both familiar and unfamiliar faces, masking potential effects. The looking time paradigm developed in this study appears to be a promising approach to investigate a variety of other research questions linked to how domestic ungulate species perceive their physical and social environment.

Metadata
DOI: 10.24072/pcjournal.473
Type: Research article
Keywords: looking time, recognition, visual preference, ear position

Deutsch, Jana 1, 2; Lebing, Steve 3; Eggert, Anja 1; Nawroth, Christian 1

1 Research Institute for Farm Animal Biology, Dummerstorf, Germany
2 University of Rostock, Faculty of Mathematics and Natural Sciences, Institute of Biosciences, Germany
3 University of Rostock, Faculty of Agricultural and Environmental Sciences, Behavioural Sciences, Germany
License: CC-BY 4.0
Copyrights: The authors retain unrestricted copyrights and publishing rights
Deutsch, Jana; Lebing, Steve; Eggert, Anja; Nawroth, Christian. Goats who stare at video screens – assessing behavioural responses of goats towards images of familiar and unfamiliar con- and heterospecifics. Peer Community Journal, Volume 4 (2024), article no. e94. doi: 10.24072/pcjournal.473. https://peercommunityjournal.org/articles/10.24072/pcjournal.473/

PCI peer reviews and recommendation, and links to data, scripts, code and supplementary information: 10.24072/pci.animsci.100267

Conflict of interest of the recommender and peer reviewers:
The recommender in charge of the evaluation of the article and the reviewers declared that they have no conflict of interest (as defined in the code of conduct of PCI) with the authors or with the content of the article.

Full text


Introduction

Many cognitive paradigms rely on active decision-making, often combined with extended training periods in which subjects learn to respond to arbitrary stimuli. As a result, these paradigms can create participation biases (e.g. subjects may lack motivation to participate in the training), and once-learned contingencies may bias the outcomes of subsequent similar tests (Harlow, 1949; Rivas-Blanco et al., 2023). In particular, some species, such as prey animals, might show reduced motivation to engage in decision-making tasks due to increased alertness in a test situation where individuals are typically isolated from the rest of the group for a short period of time. Active decision-making tasks may therefore be inappropriate in some specific contexts if the goal is to test for the population-wide distribution of cognitive traits in a species or to make adequate cross-species comparisons.

Looking time paradigms (Winters et al., 2015) are experimental setups in which visual stimuli are presented to a subject and its corresponding visual attention to each stimulus is measured (see Wilson et al., 2023). They were originally developed for research on the perception of preverbal human infants (Berlyne, 1958; Fantz, 1958) and have since been increasingly used in animal behaviour and cognition research, especially in non-human primates (e.g. Krupenye et al., 2016; Leinwand et al., 2022). One prominent experimental approach within the looking time paradigm, alongside habituation and violation-of-expectation tasks, is the visual preference task (for a critical discussion of the term ‘visual preference’ see Winters et al., 2015). In this experimental setup, visual stimuli are presented either simultaneously or sequentially, and a subject’s preference for a particular stimulus is assessed by measuring its visual attention to each stimulus (Steckenfinger & Ghazanfar, 2009; Racca et al., 2010; Méary et al., 2014; Leinwand et al., 2022). One of the main assumptions of the visual preference task is that animals direct their visual attention for longer to objects or scenes that they perceive as more salient or that elicit more interest (Winters et al., 2015). An increased interest in specific stimuli can have multiple reasons, such as the perception of increased attractiveness or threat, novelty or familiarity (Wilson et al., 2023). However, the underlying motivation to show increased interest in a stimulus is often difficult to assess, as multiple motivational factors can occur simultaneously (for a critical discussion of the interpretation of looking behaviour see Wilson et al., 2023). Visual preference tasks do not require intensive training of learned responses, are relatively fast to perform and provide a more naturalistic setup compared to many decision-making tasks (Racca et al., 2010; Wilson et al., 2023). Looking time paradigms might be particularly valuable for assessing socio-cognitive capacities such as individual discrimination and recognition, as social stimuli often have a higher biological relevance than artificial and/or non-social stimuli and may therefore elicit a stronger behavioural response.

Individual recognition refers to a subset of recognition that occurs when one organism identifies another according to its unique distinctive characteristics (Tibbetts & Dale, 2007). This process may be important in an animal’s social life, as an animal that recognises another individual also recognises attributes such as the sex and social status of a familiar group member, the out-group status of an unfamiliar conspecific, or even the heterospecific status of members of other species (Coulon et al., 2009). To achieve visual individual recognition, many animal species rely on the process of face recognition (e.g. paper wasps (Polistes fuscatus): Tibbetts, 2002; cichlid fish (Neolamprologus pulcher): Kohda et al., 2015; cattle (Bos taurus): Coulon et al., 2009; sheep (Ovis aries): Kendrick et al., 2001).

In social situations in which fast decision-making is required, it may be advantageous to use social categories rather than relying on individual features. These categories are established through social recognition, defined as the capability of individuals to categorise other individuals into different classes, e.g. familiar vs. unfamiliar, kin vs. non-kin, or dominant vs. subordinate (Gheusi et al., 1994). Categorising individuals can simplify decision-making in complex social environments by reducing the information load (Zayan & Vauclair, 1998; Ghirlanda & Enquist, 2003; Lombardi, 2008; Langbein et al., 2023). Therefore, social recognition might be considered a cognitive shortcut for decision-making. The capability to differentiate between other individuals in two-dimensional images based on social recognition has been shown in several non-human animals (e.g. great apes: Leinwand et al., 2022; capuchin monkeys (Cebus apella): Pokorny & de Waal, 2009; horses (Equus caballus): Lansade et al., 2020; cattle: Coulon et al., 2011; sheep: Peirce et al., 2001).

Like many ungulate species, goats are highly vigilant prey animals that rely strongly on their visual and auditory senses to detect predators (Adamczyk et al., 2015). As feral goats live in groups with a distinct hierarchy (Shank, 1972), it is likely that they can tell familiar and unfamiliar conspecifics apart (Keil et al., 2012). Goats also show sophisticated social skills, e.g. the ability to follow the gaze direction of a conspecific (Kaminski et al., 2005; Schaffer et al., 2020). It can be assumed that paying attention to conspecific head cues plays an important role in a goat’s social life, as goats use head movements to indicate their rank in the hierarchy (Shank, 1972). Goats have also been shown to attribute attention to humans (Nawroth et al., 2015), follow their gaze (Schaffer et al., 2020) and prefer to approach images of smiling humans over images of angry humans (Nawroth & McElligott, 2017), indicating high attention to human facial features. These characteristics make them an ideal candidate species for answering questions regarding their socio-cognitive capacities using looking time paradigms.

In this study, we tested whether a looking time paradigm can be used in dwarf goats to answer biological questions, in this case whether they are capable of spontaneously recognising familiar and unfamiliar con- and heterospecific faces presented as two-dimensional images. To do this, we presented the subjects with a visual preference task in which the visual stimuli were presented sequentially, and analysed the looking behaviour towards each stimulus. We hypothesised that goats attribute their visual attention to suddenly appearing objects in their environment (H1). We therefore predicted that our subjects would pay more attention (i.e. show higher looking durations) to a video screen presenting a stimulus compared to a white screen (P1). Moreover, we hypothesised that goats show different behavioural responses to two-dimensional images of conspecific compared to heterospecific faces, irrespective of familiarity (H2). A preference for looking at conspecifics over heterospecifics has been shown in primates (Fujita, 1987; Demaria & Thierry, 1988; Kano & Call, 2014; but see Tanaka, 2007 for an effect in the opposite direction). Sheep, a ruminant species closely related to goats, also preferred conspecific over human images when faced with a discrimination task in an enclosed Y-maze (Kendrick et al., 1995). We therefore predicted that the goats in our study would pay more attention (i.e. show higher looking durations) to conspecific compared to heterospecific faces, showing a visual preference for conspecific stimuli (P2). We also hypothesised that goats are able to spontaneously recognise familiar and unfamiliar con- and heterospecifics when being presented with their faces as two-dimensional images (H3). The capability to differentiate between familiar and unfamiliar individuals has been demonstrated in several domestic animal species, e.g. llamas (Lama glama) (Taylor & Davis, 1996, real humans as stimuli), horses (Lansade et al., 2020, photographs of human faces), cattle (Coulon et al., 2011, photographs of cattle faces) and sheep (Peirce et al., 2000, photographs of sheep faces; Peirce et al., 2001, photographs of human faces). Therefore, we predicted that the subjects in our study would show differential looking behaviour depending on the familiarity of the presented individuals. In particular, we expected that goats would show a visual preference (i.e. higher looking durations) for unfamiliar compared to familiar heterospecific stimuli (see Leinwand et al., 2022; Thieltges et al., 2011 for this preference in great apes and dolphins (Tursiops truncatus)), and for familiar compared to unfamiliar conspecific stimuli (see Coulon et al., 2011 for this preference in cattle), resulting in a statistical interaction between the species displayed in the stimuli and the displayed individual’s familiarity to our study subjects (P3). We also explored goats’ ear positions (forward, backward, horizontal, other) during stimulus presentation, as ear position has been suggested as an indicator of differences in arousal and/or valence in goats (Briefer et al., 2015; Bellegarde et al., 2017).

Animals, Materials and Methods

Ethical note

The study was waived by the State Agency for Agriculture, Food Safety and Fisheries of Mecklenburg-Vorpommern (Process #7221.3-18196_22-2) as it was not considered an animal experiment in terms of sect. 7, para. 2 of the German Animal Welfare Act. Animal care and all experimental procedures were in accordance with the ASAB/ABS guidelines for the use of animals in research (ASAB Ethical Committee/ABS Animal Care Committee, 2023). All measurements were non-invasive, and the experiment did not last longer than ten minutes per day for each individual goat. If the goats had shown signs of a high stress level, the test would have been stopped.

Subjects and Housing

Two groups of six non-lactating female Nigerian dwarf goats, one to two years old (mean age ± SD at the start of testing; group A: 688.2 ± 5.2 d; group B: 472.2 ± 1.2 d), reared at the Research Institute for Farm Animal Biology (FBN) in Dummerstorf participated in the experiment. The animals had previously participated in an experiment with an automated learning device (Langbein et al., 2023) at an earlier age (groups A and B) and in an experiment on prosocial behaviour in goats (unpublished data; group A). Each group was housed in an approximately 15 m² (4.8 m x 3.1 m) pen consisting of a deep-bedded straw area (3.1 m x 3.1 m) and a 0.5 m elevated feeding area (3.1 m x 1.5 m). Each pen was equipped with a hay rack, a round feeder, an automatic drinker, a licking stone, and a wooden podium for climbing. Hay and food concentrate were provided twice a day at 7 am and 1 pm, while water was offered ad libitum. Subjects were not food-restricted during the experiments.

Experimental arena and apparatus

The experimental arena was located next to the two home pens. It consisted of three adjoining rooms with 2.1 m high wooden walls connected by doors (Fig. 1). Data collection took place in a testing area (4.5 m x 2 m) divided into two parts (2.25 m x 2 m) by a fence that facilitated the separation of single subjects from the rest of the group. The experimental apparatus was inserted into the wall between the testing area and the experimenter booth (2 m x 1.5 m), which was located behind the apparatus and where an experimenter (E1) was positioned during all sessions. The subject in the testing area had no visual contact with E1. Between the different sessions of data collection subjects remained in an adjacent waiting area (6 m x 2.2 m).

Figure 1 - Scheme of the experimental arena, including the testing area, the experimenter booth, the waiting area and the experimental apparatus.

The experimental apparatus (Fig. 2) was inserted into the wall between the testing area and the experimenter booth at a height of 36 cm above the floor and consisted of two video screens (0.55 m x 0.33 m) mounted on the rear wall of the apparatus. The video screens were positioned laterally so that they were angled (around 45°) relative to a subject standing in front of the apparatus. Subjects standing in front of the apparatus were considered to look approximately at the centre of the screens. Two digital cameras were installed: one (AXIS M1135, Axis Communications, Lund, Sweden) on the ceiling provided a top view of the subject, and one (AXIS M1124, Axis Communications, Lund, Sweden) on the wall separating the two video screens provided a frontal view of the subject. Videos were recorded at 30 frames per second. A food bowl, connected to the experimenter booth by a tube, was inserted into the bottom of the apparatus. This allowed E1 to deliver food items without being in visual contact with the tested subject.

Figure 2 - Experimental apparatus with video screens (VS), cameras (C), food bowl (B) and tube (T).

Habituation

The experiment required the handling of the animals by the experimenters (E1 and E2). To this end, they entered the pen, talked to the animals, provided food items (uncooked pasta), and, if possible, touched them. The experimenters stayed in the pen for approximately 30 minutes daily for twelve days (group A) and eleven days (group B) until each of the animals remained calm when the experimenters entered the pen and could be hand-fed.

After this home pen habituation period, the animals were introduced as groups to the experimental arena for approximately 15 minutes per day. On the first two days of this habituation phase, the subjects were allowed to move freely between the waiting area and the testing area, and food was provided in the whole arena. On the third day, the doors between the two areas were temporarily closed and food was provided only at the experimental apparatus with E1 sitting in the experimenter booth and inserting food through the tube into the food bowl, while E2 remained with the animals in the testing area. The video screens of the experimental apparatus were turned off on the first two days of the habituation phase and then turned on only showing white screens. Group habituation lasted for ten sessions for both groups. After these ten sessions, all animals remained calm in the experimental arena, fed out of the food bowl in the experimental apparatus, and were thus transferred to the next habituation phase.

In the next habituation phase, all goats were transferred to the experimental arena but only two subjects were introduced to the testing area while the other four group members remained in the waiting area to maintain acoustic and olfactory contact. Each pair was provided with 20 food items over a period of 5 min via the tube connecting the food bowl in the apparatus and the experimenter booth. Subjects were immediately reunited with the rest of the group after the separation. Optimal subject groupings were identified over time, as some subjects showed signs of stress when separated in the pair setting. This habituation phase took ten sessions for group A and 14 sessions for group B. After this phase, all animals remained calm in the pair setting, fed out of the food bowl in the experimental apparatus, and were thus transferred to the next habituation phase.

Finally, subjects were habituated alone for approximately 3 min per day, using the same procedure as for the pair habituation, except that only 10 food items were provided via the tube connecting the food bowl in the apparatus and the experimenter booth. This habituation phase took 5 sessions for both groups. Two subjects showed signs of a high stress level (e.g. loud vocalisations, restless wandering, and rejection of feed uptake) during the habituation and were therefore excluded from the experiment. The remaining ten subjects that stayed calm in the testing area and fed out of the food bowl proceeded to the experimental phase during which one subject needed to be excluded at a later stage as it began to show indicators of high stress.

Experimental procedure

Stimuli and stimulus presentation

In this experiment, photographs of human and goat faces were used as stimuli. A professional photographer took pictures of the individual goats from both groups and of four humans, two familiar to the goats (E1 and E2) and two unfamiliar to them. Familiar humans had almost daily positive interactions with the animals (feeding them with dry pasta and, if possible, touching and gently stroking them) during the habituation phase over at least three months (once a day, five days per week). Familiar and unfamiliar humans were matched for sex (one female and one male each). Each face was photographed in two slightly different orientations: the human faces were rotated slightly to the left and right, and the goat faces were photographed in two different head orientations, provided that both eyes were visible (Fig. 3). This was done to increase the variability of the provided stimuli. Additionally, each picture was measured for its brightness (ImageJ 1.53m, Wayne Rasband and contributors, National Institutes of Health, USA, https://imagej.net, Java 1.8.0-internal (32-bit)) and its size (Corel® Photo-Paint X7 (17.1.0.572), © 2014 Corel Corporation, Ottawa, Canada). No difference was found between the goat faces and the human faces with respect to brightness (goats: 231.66 ± 6.1 (mean ± SD), humans: 225.91 ± 6.44), but the two stimulus categories varied in size (goats: 46092.06 ± 2655.86 px (mean ± SD), humans: 59317.5 ± 2260.65 px). The stimuli were presented approximately life-sized, in colour, and on a white background. Images were presented either on the left or on the right screen while the other screen remained white. Each test session consisted of a stimulus set of five slides: an initial white slide started the set, followed by four slides with a stimulus on either the left or the right side. Four stimulus sets showed human faces and 16 stimulus sets showed goat faces. Each of these sets contained pictures of two familiar and two unfamiliar goats/humans, with each goat/human presented only once. The human images were the same for all subjects, while the goat images varied, as an individual goat was never shown its own picture as a stimulus. The stimuli were presented on the video screens in a pseudorandomised and counterbalanced order.
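
For illustration, this kind of brightness check could also be scripted. The following is a minimal sketch in R (the authors used ImageJ and Corel Photo-Paint; the `magick` package, file paths and helper function here are my own assumptions, not the authors' pipeline):

```r
# Minimal sketch of a stimulus brightness check in R. The authors used ImageJ;
# the magick package, paths and helper below are illustrative assumptions.
library(magick)

mean_brightness <- function(path) {
  img  <- image_read(path)                         # load one stimulus image
  gray <- image_convert(img, colorspace = "gray")  # collapse to grayscale
  mean(as.integer(image_data(gray)))               # mean pixel value (0-255)
}

goat_files  <- list.files("stimuli/goats",  full.names = TRUE)  # hypothetical
human_files <- list.files("stimuli/humans", full.names = TRUE)  # hypothetical

c(goats  = mean(sapply(goat_files,  mean_brightness)),
  humans = mean(sapply(human_files, mean_brightness)))
```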

Data collection

Data collection took place in May and June 2022. Testing started at 9:00 a.m. each day, and each subject completed eight sessions (4 consecutive sessions with goat stimuli, and 4 consecutive sessions with human stimuli with a switch of stimulus species between session 4 and 5) with one session per day. Group A was presented with the goat faces first, group B with the human faces. A session started when the subject was separated from the rest of the group and stood in front of the experimental apparatus. Prior to the stimulus presentation, one to two motivational trials were conducted in which a food item was inserted into the apparatus without any stimulus being presented for 10 seconds afterwards. Immediately before each stimulus presentation, a food item was inserted into the food bowl. The stimulus presentation lasted for 10 seconds. A test trial was followed by another motivational trial so that motivational trials and test trials alternated until all four stimuli of a set had been presented. The number of motivational trials varied depending on the behaviour of the subject and could be increased, e.g. if the animal was restless at the beginning of the session. Data from the subject that needed to be excluded after the fifth test session remained in the data set.

Figure 3 - Examples of the faces used as stimuli (A) familiar human and (B) goat (familiarity depended on the subject tested).

Data scoring and analysis

Video coding

The behaviour of the individual goats was scored using BORIS (Friard & Gamba, 2016, Version 7.13), an event-logging software for video coding and live observations. For the video coding of the looking behaviour, the recordings from the camera providing a top view of the subject were used. Coding was performed in frame-by-frame mode, and the researchers remained blind to the stimulus presentation by covering the video screens of the apparatus during coding. The first look was scored when the subject directed its gaze towards a video screen for the first time in a trial once the head was lifted from the food bowl. Besides the direction of the first look, the looking duration at each video screen was scored. To determine the direction in which the subject was looking, a fictitious line extending from the middle of the snout (orthogonal to the line connecting both eyes) was drawn (Fig. 4). As this line would align with a binocular focus of the tested subject, it was used as an indicator of a goat directing its attention to a particular screen. The goat’s looking behaviour was not scored when the subject was not facing the wall of the testing area in which the apparatus was inserted, because then it could not be ensured that it was actually paying attention to the presented stimulus. Video elements in which the goat’s face was not visible due to occlusion (e.g. when the subject was sniffing a video screen after moving into the apparatus with both forelegs) were not scored. There was no scoring when the subject’s snout was above its eye level, because in this case it was assumed that it was looking at the ceiling of the apparatus and not at the video screens or the wall separating the two video screens. There was also no scoring when the subject’s snout was perpendicular to the bottom of the apparatus, as in this case it was assumed that the subject was sniffing the bottom of the apparatus with its sight directed towards it rather than towards the video screens. Inter-observer reliability for the looking duration towards S+ (the video screen presenting a stimulus) was assessed in a previous stimulus presentation study using the same coding rules and was found to be very high (80 out of 200 trials (40%) were coded by two observers; Pearson correlation coefficient (r) = 0.96; p < 0.001).

Figure 4 - Image of the camera providing a top view of the apparatus during the stimulus presentation. Video screens were covered during the video coding to reduce potential biases during video coding. A fictitious line extending from the middle of the snout (red) was used in the blind coding for deciding which video screen the subject was looking at.
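
To make the scoring rule concrete, the geometry behind this fictitious line can be expressed in a few lines of R. This is a sketch only: in the study the judgement was made by human coders in BORIS, and the coordinates and function names here are hypothetical.

```r
# Sketch of the gaze-line geometry used for scoring. In the study this
# judgement was made by human coders; coordinates and names are hypothetical.
# All points are (x, y) positions in the top-view video frame.
gaze_side <- function(eye_left, eye_right, snout) {
  mid  <- (eye_left + eye_right) / 2   # midpoint of the line connecting both eyes
  v    <- eye_right - eye_left         # direction of the eye line
  perp <- c(-v[2], v[1])               # a vector orthogonal to the eye line
  # orient the perpendicular so it points from the eye midpoint towards the snout
  if (sum(perp * (snout - mid)) < 0) perp <- -perp
  # with image x increasing to the right, the sign of the x component of the
  # gaze line decides between the left and the right video screen
  if (perp[1] < 0) "left screen" else "right screen"
}

gaze_side(eye_left = c(3, 5), eye_right = c(5, 4), snout = c(4.7, 6.5))
# [1] "right screen"
```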

For the video coding of the ear positions during the stimulus presentation, which was also performed in frame-by-frame mode, recordings from the camera providing a frontal view of the subject were used. We scored four different ear positions (see Boissy et al., 2011; Briefer et al., 2015 for related scoring in goats and sheep): ears oriented forward (tips of both ears pointing forward), backward (tips of both ears pointing backward), horizontal (ear tips perpendicular to the head-rump axis) and other postures (all ear positions not matching the positions mentioned above, i.e. asymmetrical ears or the change between two ear positions). The ear positions were analysed for the entire ten seconds of stimulus presentation, regardless of whether the subjects were looking at the video screens. Video elements were not scored when both ears (or at least parts of both ears that allowed a precise determination of the ear position) were not visible. There was also no scoring when the ear position could not be clearly determined, i.e. when ear tip positions were unclear because the subject was standing further away, even though both ears were visible. Inter-observer reliability for the duration of ears in the respective positions was found to be high (32 out of 305 trials (11%) were coded by two observers; Pearson correlation coefficient (r) = 0.85; p < 0.001).

Statistical analysis

Statistical analysis was carried out in R (R Core Team, 2022, Version 4.2.2).
To assess whether subjects looked longer at one of the video screens, the mean looking durations at the video screen presenting a stimulus (S+) and at the video screen without a stimulus (S-) were compared for each subject using a Wilcoxon signed-rank test (as data points were not normally distributed). Subsequently, we analysed how often the first look (FL) was directed towards S+ or S- and calculated the probability (p) of the FL being directed towards S+ rather than S-. Additionally, the odds, representing how much more frequently the FL was directed towards the stimulus than towards the white screen, were calculated as follows:

odds = p / (1 – p)
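
In R, the screen comparison and the first-look odds might be computed as follows (a minimal sketch; the trial-level data frame `trials` and its column names are hypothetical):

```r
# Sketch of the S+/S- comparison and the first-look odds. The trial-level
# data frame 'trials' and its column names are hypothetical.
per_subject <- aggregate(cbind(look_splus, look_sminus) ~ subject,
                         data = trials, FUN = mean)

# paired non-parametric comparison of the two screens (Wilcoxon signed-rank)
wilcox.test(per_subject$look_splus, per_subject$look_sminus, paired = TRUE)

# probability p of the first look being directed towards S+, and the odds
p    <- mean(trials$first_look == "S+", na.rm = TRUE)
odds <- p / (1 - p)
```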

Furthermore, four linear mixed-effects models (R package “blme”; Chung et al., 2013) were set up. The four respective response variables were “looking duration at S+” (out of the total of 10 s of stimulus presentation), “Forward_Ratio” (time with ears oriented forward divided by the summed durations of all four ear positions), “Backward_Ratio” (time with ears oriented backward divided by the summed durations of all four ear positions) and “Horizontal_Ratio” (time with ears oriented horizontally divided by the summed durations of all four ear positions).
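
As a sketch, the looking-duration model might be specified with `blme` as follows (column names are hypothetical; the nested random-effects term mirrors the Subject/Session structure described in the next paragraph):

```r
# Sketch of the looking-duration model fitted with blme (Chung et al., 2013).
# Column names are hypothetical; trials are nested in sessions within subjects.
library(blme)

m_look <- blmer(
  log(look_splus) ~ stimulus_species * stimulus_familiarity + testing_order +
    (1 | subject / session),              # session nested within subject
  data = subset(trials, look_splus > 0)   # zero-duration trials were excluded
)
summary(m_look)

# The three ear-position models follow the same structure, e.g. (hypothetical):
m_fwd <- blmer(forward_ratio ~ stimulus_species * stimulus_familiarity +
                 testing_order + (1 | subject / session), data = ears)
```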

For all models, we checked the residuals of the models graphically for normal distribution and homoscedasticity (R package “performance”; Lüdecke et al., 2021). To meet model assumptions, “looking duration at S+” was log-transformed, and trials in which “looking duration at S+” had a value of zero (n = 17) were excluded, as this was an indication that subjects might have been distracted. All models included “Stimulus species” (two levels: human, goat), “Stimulus familiarity” (two levels: familiar, unfamiliar) and “Testing order” (two levels: first human stimuli, first goat stimuli) as fixed effects. We also tested for an interaction effect between “Stimulus species” and “Stimulus familiarity”. Repeated measurements “Session” (1-8) per “Subject” (identity of the goat) were defined as nested effects. We followed a full model approach, i.e., we set up a maximum model that we present and interpret (Forstmeier & Schielzeth, 2011). First, we calculated the global p-value (between the maximum and null model) using parametric bootstraps (1,000 bootstrap samples, R package “pbkrtest”; Halekoh & Højsgaard, 2014). If that model reached a low p-value, we tested each of the predictor variables (including the interaction) singly by comparing the full model to the one omitting this predictor. P-values calculated with parametric bootstrap tests give the fraction of simulated likelihood ratio test (LRT) statistics that are larger than or equal to the observed LRT value. This test is more adequate than the raw LRT because it does not rely on large-sample asymptotics and correctly takes the random-effects structure into account (Halekoh & Højsgaard, 2014). Moreover, we tested whether the looking duration towards S+ increased between session 4 and session 5, which would indicate a dishabituation effect caused by the switch of the presented stimulus species. To this end, the mean looking durations towards S+ in both sessions were calculated for each subject and then compared using a paired t-test. The Type 1 error rate was controlled at a significance level of 0.05 for all tests.
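
The bootstrap comparisons and the session 4 vs. 5 test might look like this (a sketch building on the model object above; `pbkrtest::PBmodcomp` is the function described by Halekoh & Højsgaard (2014), all other names are hypothetical):

```r
# Sketch of the parametric bootstrap tests (1,000 simulations) and the paired
# t-test for the stimulus-species switch; object names are hypothetical.
library(pbkrtest)

# global test: maximum model vs. null model (intercept + random effects only)
m_null <- update(m_look, . ~ 1 + (1 | subject / session))
PBmodcomp(m_look, m_null, nsim = 1000)

# single-predictor test, e.g. dropping the interaction from the full model
m_no_int <- update(m_look, . ~ . - stimulus_species:stimulus_familiarity)
PBmodcomp(m_look, m_no_int, nsim = 1000)

# dishabituation check: per-subject mean looking durations in sessions 4 and 5
t.test(mean_s5, mean_s4, paired = TRUE)  # mean_s4/mean_s5: hypothetical vectors
```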

Results

Preference for S+ over S- regarding looking duration

Comparing the mean durations per subject, subjects looked significantly longer at S+ (median ± IQR: 2.27 ± 1.03 s) than at S- (0.56 ± 0.4 s; Wilcoxon signed-rank test: V = 53; p = 0.006; Fig. 5).

Figure 5 - Boxplots showing the mean looking durations at the video screen without a stimulus (S-) and the video screen presenting a stimulus (S+) of all subjects across all trials. Lines indicate data points from the same individual.

Preference for S+ over S- regarding first look

In 264 of the 301 trials (86.6%) in which the animals were attentive to the video screens (4 trials in which the animals looked at neither the left nor the right video screen were excluded), the FL was directed towards S+. The FL was thus about six times more likely to be directed towards S+ than towards S-.

Factors affecting looking duration at S+

Regarding the looking duration model, we found no substantial interaction effect between the factors “Stimulus species” and “Stimulus familiarity” (p = 0.27). Across all test trials, goats looked longer at goat faces compared to human faces (p = 0.027, Fig. 6). The familiarity of the stimulus subject and the testing order did not substantially affect their looking duration at S+ (both p ≥ 0.48, Fig. 6).

Figure 6 - Small dots represent the looking duration at the video screen presenting a stimulus (S+) across species, familiarity, and testing order. Larger black dots are the corresponding model estimates for each condition, and thin black lines and whiskers are the 95 % confidence intervals of the maximum model (including the main effects and interactions).

Differences in looking duration when stimulus species switched (Session 4 vs. Session 5)

Subjects looked longer at S+ during session 5 (3.28 ± 1.5 s; mean ± SD) compared to session 4 (1.58 ± 0.77 s; paired t-test: t = -1.70; p = 0.014, Fig. 7) when the stimulus species switched from human to goat or vice versa.

Factors affecting ear positions during stimulus presentation

Regarding the ear position, none of the three models revealed a significant interaction effect between “Stimulus species” and “Stimulus familiarity” (all p ≥ 0.32). We found no statistically supported differences in the ratios of the three ear positions for the fixed factors “Stimulus species” (all p ≥ 0.57), “Stimulus familiarity” (all p ≥ 0.44) and “Testing order” (all p ≥ 0.61).

Figure 7 - Boxplots showing the mean looking durations at S+ in sessions 4 and 5 (stimulus switch from human to goat or vice versa) for all subjects. Lines indicate data points from the same individual.

Discussion

In this study, we tested whether a looking time paradigm can be used to answer questions on recognition capacities in dwarf goats, in this case whether goats are capable of recognising familiar and unfamiliar con- and heterospecific faces when being presented as two-dimensional images. To assess visual attention (via looking time) and arousal (via ear positions), we measured the goats’ looking behaviour towards the stimuli and their ear positions during the trial. Our results show that goats differ in their behavioural responses when presented with 2D images of either con- or heterospecifics, showing a visual preference for goat faces. However, their response did not differ between familiar and unfamiliar individuals (irrespective of species), suggesting that goats either cannot spontaneously assign social recognition categories to 2D images or are equally motivated to pay close attention to both categories (but for different reasons). These findings are partly in contrast to related research on goats and other domestic ungulate species (Coulon et al., 2011; Langbein et al., 2023) and thus raise questions about the comparability of test designs.

As predicted (P1), goats paid more attention to the video screen presenting a stimulus (S+) than to the white screen (S-), supporting our hypothesis that goats attribute their visual attention to suddenly appearing objects in their environment (H1). Additionally, 86.6 % of the first looks were directed towards S+. These results indicate that the subjects were attentive to the presented stimuli and therefore provide good evidence that the design of our looking time paradigm is an appropriate experimental setup to address the visual sense of small ungulates.

As predicted (P2), subjects paid more attention to goat faces than to human faces, supporting our hypothesis that goats show different behavioural responses to two-dimensional images of conspecific compared to heterospecific faces, irrespective of familiarity (H2). This aligns with Kendrick et al. (1995), who found that sheep preferred conspecifics over humans in a visual discrimination task, and with studies conducted with rhesus macaques (Fujita, 1987; Demaria & Thierry, 1988). There are several possible reasons why the goats in our study paid more visual attention to the conspecific stimuli. One possible explanation might be that conspecific stimuli generally convey more biologically relevant information, such as the identity, sex, age, status in the hierarchy or even the emotional state of a conspecific. This should apply to goats in particular, given their highly social nature, whether as an inherent trait or as a result of developmental factors. In our study, limited exposure to humans prior to the study might also have resulted in a bias towards conspecifics. It would therefore be interesting to see whether hand-reared goats would also show a conspecific bias. We cannot fully exclude that participating in other experiments might have influenced the behaviour of our subjects, especially as the subjects from our study had participated in an experiment with an automated learning device in which photographs were presented on a computer display. However, we never observed our subjects showing the learned response from this previous experiment (using the video screen as a touchscreen with their snout to indicate a choice regarding a photograph), so it can be considered unlikely that they transferred their learned responses and associated behaviours to our study. Another possible reason for the observed visual preference for conspecific faces might be that the sight of a conspecific works as a stress buffer during the isolation in the test trials, as has been shown for sheep isolated from their social group (Da Costa et al., 2004). Da Costa et al. (2004) tested whether sheep in social isolation would show reduced indications of stress when presented with an image of a conspecific compared to images of goats or inverted triangles, and found that seeing a conspecific face in social isolation significantly reduced behavioural, autonomic and endocrine indices of stress. As feral goats and sheep have comparable social structures, it is reasonable to assume that images of conspecifics might likewise have had positive effects on the tested subjects in our study. Additional assessment of stress parameters, such as heart rate (variability) or cortisol concentration, is recommended (see e.g. Da Costa et al., 2004).

Alternatively, the shorter looking durations at the human stimuli might be due to avoidance of the human face images, as the presented humans might be perceived as potential predators (Davidson et al., 2014). This might have led to behavioural responses aimed at reducing the time the human images could be observed, e.g. by moving away from the experimental apparatus. In sheep, human eye contact altered behaviour compared to no human eye contact, resulting in more locomotor activity and urination when being stared at, but no differences in fear-related behaviours, such as escape attempts (Beausoleil et al., 2006). This might imply that human eye contact is interpreted as a warning cue by sheep (Beausoleil et al., 2006). Goats in our study might thus have simply avoided the human image (and gaze) rather than showing an active preference for goat images.

Additional support for H2 is provided by the finding that the subjects in our study also looked longer at the stimuli in session 5 compared to session 4 when the presented stimulus species was switched from human to goat or vice versa. This switch corresponds to a habituation-dishabituation paradigm. In this paradigm, a habituation stimulus is presented to the subject either for a long period or over several short periods (habituation period) and is then replaced by a novel stimulus in the dishabituation period (Kavšek & Bornstein, 2010). In habituation-dishabituation paradigms, the subject’s attention to the habituation stimulus is expected to decrease during the habituation period, but then to increase in the dishabituation period when a novel stimulus (that the subject is able to distinguish from the previous one) is presented (Kavšek & Bornstein, 2010). As our study found longer looking durations at the novel stimulus species compared to the old one, it can be assumed that the subjects noticed that the stimuli had changed and were therefore able to discriminate between conspecific and heterospecific stimuli. This additionally supports our primary findings regarding the capability to discriminate between con- and heterospecifics when presented as two-dimensional images.

Contrary to our third prediction (P3), we found no statistical support for differences in the looking behaviour with respect to the familiarity of the depicted individuals. Consequently, we have to reject the hypothesis that goats are able to spontaneously recognise familiar and unfamiliar con- and heterospecifics when being presented with their faces as two-dimensional images (H3). There are several possible reasons, of varying likelihood, that might explain this finding. One possibility is that the subjects were simply not able to differentiate between familiar and unfamiliar individuals because they did not form the concept of familiar vs. unfamiliar individuals associated with social recognition in general. Alternatively, visual head cues alone might not be sufficient for goats to form these categories. Keil et al. (2012) even found that goats do not necessarily need to see a conspecific’s head to discriminate between group members and goats from another social group. In contrast, results from other ruminants, such as cattle (Coulon et al., 2011) and sheep (Peirce et al., 2000, 2001), have shown that these species are capable of forming this concept from two-dimensional head cues in a visual discrimination task. Langbein et al. (2023) also found some evidence that goats are able to associate two-dimensional representations of conspecifics with real animals in a visual discrimination task. It is therefore surprising that the subjects in our study did not show differential looking behaviour with respect to the familiarity of the individuals presented. It might also be possible that subjects were indeed able to differentiate between the categories of stimulus familiarity, but had the same level of motivation (albeit for different reasons) to pay close attention to both categories, resulting in similar looking durations. The different reasons for looking at either familiar or unfamiliar con- or heterospecifics (e.g. novelty (Fantz, 1964; Tulving & Kroll, 1995), threat perception, individual recognition, positive associations or social buffering (for a more detailed discussion see Rault, 2012)) might therefore have compensated for each other and could, ultimately, have led to the absence of a visual preference for a specific category in this study. This assumption also seems plausible when considering the results of Demaria & Thierry (1988), who presented images of both familiar and unfamiliar conspecifics to stump-tailed macaques. They did not find a difference in the looking durations between the two stimulus categories, but did observe that, when looking at the image of a familiar conspecific, some subjects turned back to look at the social group to which the stimulus macaque belonged. This pattern was never observed for unfamiliar conspecifics, which might indicate that the subjects did indeed distinguish between familiar and unfamiliar individuals. However, this capability could not be inferred from the looking durations at the images per se, as the macaques also showed no preference for either category.

We did not find statistical support for an association between the presented stimulus species or the familiarity of the depicted individuals and the amount of time spent with the ears in a specific position. A higher percentage of time with the ears in a forward position might be associated with situations that lead to high arousal and/or increased attention in goats (Briefer et al., 2015; Bellegarde et al., 2017). Thus, it seemed probable that the subjects in our study would show a higher percentage of ears in a forward position when presented with the stimulus species that they looked at longer (here, goat faces). We can only speculate as to why this was not the case in our study. One possibility could be that the ears-forward position, as well as the ears-backward position, is not solely associated with the level of arousal or attention in goats, but also with the valence of the situation experienced by the animal (Briefer et al., 2015; Bellegarde et al., 2017). As we cannot safely infer from our looking duration data that subjects actually perceived the two-dimensional images of the stimulus subjects as representations of their real, three-dimensional counterparts, we cannot make good assumptions about the particular levels of valence and arousal that our stimuli might have elicited in our focal subjects, making a comparison problematic. It is also possible that the 2D images presented as stimuli did not evoke arousal strong enough to make the ear position a good behavioural parameter. Therefore, the ear position during stimulus presentation does not seem to be an appropriate parameter for assessing attention in goats in our looking time paradigm.

This study has shown that looking time paradigms can be used to test discrimination abilities and visual preferences in goats, provided that the results are interpreted with caution. It thus lays the foundation for work on related research questions using this methodology. As this study was only partly able to demonstrate social visual preferences in goats, further studies are needed to identify the factors that dominantly direct the attention of goats. To this end, social visual stimuli other than head cues alone could be used, e.g. full-body images of con- or heterospecifics or even videos. In addition, different sensory modalities could be addressed, e.g. by pairing visual with acoustic or olfactory cues. Such a cross-modal approach could provide subjects with a more holistic, yet highly controlled, representation of other individuals. Assessing the social relationships between the subjects in their home environment, such as dominance rank or the distribution of affiliative interactions, could carry additional information when explaining potential biases or preferences in subjects’ looking durations and should be considered in future studies. Finally, a more diverse study population (larger age range, more than one sex tested, etc.) will help to make more generalizable statements about social visual preferences in goats. Further looking time studies in goats should not only focus on behavioural responses to specific stimuli, but should also consider adding the measurement of physiological parameters that indicate stress. For example, measuring heart rate or heart rate variability (e.g. Langbein et al., 2004) or cortisol concentration (Da Costa et al., 2004) could help to obtain a more comprehensive picture of how goats perceive specific 2D stimuli. In terms of technical advances, eye-tracking could also be considered to provide more accurate estimates of visual attention in focal subjects (Shepherd & Platt, 2008; Völter & Huber, 2021; e.g. Gao et al., 2022). In the future, this looking time approach could also be used to assess the interplay between cognition and emotions, e.g. to assess attention biases associated with the affective state of an animal (Crump et al., 2018). Given that appropriate stimuli can be identified, an automated looking time paradigm would offer an efficient approach to assessing husbandry conditions, not only experimentally, but also on-farm.

Conclusion

The looking time paradigm presented here appears to be generally suitable for testing visual preferences in dwarf goats, while assessing the concept of familiarity may require better controls for confounding factors to disentangle the different motivational factors associated with the presented stimuli. Goats showed a visual preference for conspecifics when discriminating between two-dimensional images of goats and humans. This is consistent with previous findings in macaques (Fujita, 1987; Demaria & Thierry, 1988) and sheep (Kendrick et al., 1995). In contrast to previous research in a variety of species (e.g. great apes: Leinwand et al., 2022; capuchin monkeys: Pokorny & de Waal, 2009; cattle: Coulon et al., 2011; horses: Lansade et al., 2020; sheep: Peirce et al., 2001), we found no attentional differences when goats were presented with two-dimensional images of familiar and unfamiliar individuals, which calls into question the comparability of results obtained with different experimental designs.

Acknowledgements

We would like to thank the staff of the Experimental Animal Facility Pygmy Goat at the Research Institute for Farm Animal Biology in Dummerstorf, Germany, for taking care of the animals. Special thanks go to Michael Seehaus for technical support. Preprint version 4 of this article has been peer-reviewed and recommended by Peer Community In Animal Science (https://doi.org/10.24072/pci.animsci.100267; Veissier, 2024).

Funding

The authors declare that they have received no specific funding for this study.

Conflict of interest disclosure

Christian Nawroth is recommender of PCI Animal Science. The authors declare that they comply with the PCI rule of having no financial conflicts of interest in relation to the content of the article.

Data, scripts, code, and supplementary information availability

Data, scripts and code are available online (https://doi.org/10.17605/OSF.IO/NEPWU, Deutsch et al., 2023).


References

[1] Adamczyk, K.; Górecka-Bruzda, A.; Nowicki, J.; Gumułka, M.; Molik, E.; Schwarz, T.; Earley, B.; Klocek, C. Perception of environment in farm animals – A review, Annals of Animal Science, Volume 15 (2015) no. 3, pp. 565-589 | DOI

[2] Beausoleil, N. J.; Stafford, K. J.; Mellor, D. J. Does direct human eye contact function as a warning cue for domestic sheep (Ovis aries)?, Journal of Comparative Psychology, Volume 120 (2006) no. 3, pp. 269-279 | DOI

[3] Bellegarde, L. G.; Haskell, M. J.; Duvaux-Ponter, C.; Weiss, A.; Boissy, A.; Erhard, H. W. Face-based perception of emotions in dairy goats, Applied Animal Behaviour Science, Volume 193 (2017), pp. 51-59 | DOI

[4] Berlyne, D. E. The influence of the albedo and complexity of stimuli on visual fixation in the human infant, British Journal of Psychology, Volume 49 (1958) no. 4, pp. 315-318 | DOI

[5] Boissy, A.; Aubert, A.; Désiré, L.; Greiveldinger, L.; Delval, E.; Veissier, I. Cognitive sciences to relate ear postures to emotions in sheep, Animal Welfare, Volume 20 (2011) no. 1, pp. 47-56 | DOI

[6] Briefer, E. F.; Tettamanti, F.; McElligott, A. G. Emotions in goats: mapping physiological, behavioural and vocal profiles, Animal Behaviour, Volume 99 (2015), pp. 131-143 | DOI

[7] Chung, Y.; Rabe-Hesketh, S.; Dorie, V.; Gelman, A.; Liu, J. A Nondegenerate Penalized Likelihood Estimator for Variance Parameters in Multilevel Models, Psychometrika, Volume 78 (2013) no. 4, pp. 685-709 | DOI

[8] ASAB Ethical Committee/ ABS Animal Care Committee Guidelines for the ethical treatment of nonhuman animals in behavioural research and teaching, Animal Behaviour, Volume 195 (2023), p. I-XI | DOI

[9] Coulon, M.; Baudoin, C.; Heyman, Y.; Deputte, B. L. Cattle discriminate between familiar and unfamiliar conspecifics by using only head visual cues, Animal Cognition, Volume 14 (2011), pp. 279-290 | DOI

[10] Coulon, M.; Deputte, B. L.; Heyman, Y.; Baudoin, C. Individual Recognition in Domestic Cattle (Bos taurus): Evidence from 2D-Images of Heads from Different Breeds, PLoS ONE, Volume 4 (2009) no. 2, p. e4441 | DOI

[11] Crump, A.; Arnott, G.; Bethell, E. J. Affect-Driven Attention Biases as Animal Welfare Indicators: Review and Methods, Animals, Volume 8 (2018) no. 8, p. 136 | DOI

[12] Da Costa, A. P.; Leigh, A. E.; Man, M. S.; Kendrick, K. M. Face pictures reduce behavioural, autonomic, endocrine and neural indices of stress and fear in sheep, Proceedings of the Royal Society B: Biological Sciences, Volume 271 (2004) no. 1552, pp. 2077-2084 | DOI

[13] Davidson, G. L.; Butler, S.; Fernández-Juricic, E.; Thornton, A.; Clayton, N. S. Gaze sensitivity: function and mechanisms from sensory and cognitive perspectives, Animal Behaviour, Volume 87 (2014), pp. 3-15 | DOI

[14] Demaria, C.; Thierry, B. Responses to Animal Stimulus Photographs in Stumptailed Macaques (Macaca arctoides), Primates, Volume 29 (1988) no. 2, pp. 237-244 | DOI

[15] Deutsch, J.; Nawroth, C.; Eggert, A. Data, scripts and code for: Goats who stare at video screens – assessing behavioural responses of goats towards images of familiar and unfamiliar con- and heterospecifics, OSF, 2023 | DOI

[16] Fantz, R. L. Visual experience in infants: decreased attention to familiar patterns relative to novel ones, Science, Volume 146 (1964) no. 3644, pp. 668-670 | DOI

[17] Fantz, R. L. Pattern vision in young infants, The Psychological Record, Volume 8 (1958), pp. 43-47 | DOI

[18] Forstmeier, W.; Schielzeth, H. Cryptic multiple hypotheses testing in linear models: Overestimated effect sizes and the winner's curse, Behavioral Ecology and Sociobiology, Volume 65 (2011), pp. 47-55 | DOI

[19] Friard, O.; Gamba, M. BORIS: a free, versatile open-source event-logging software for video/audio coding and live observations, Methods in Ecology and Evolution, Volume 7 (2016) no. 11, pp. 1325-1330 | DOI

[20] Fujita, K. Species recognition by five macaque monkeys, Primates, Volume 28 (1987) no. 3, pp. 353-366 | DOI

[21] Gao, J.; Adachi, I.; Tomonaga, M. Chimpanzees (Pan troglodytes) detect strange body parts: an eye-tracking study, Animal Cognition, Volume 25 (2022), pp. 807-819 | DOI

[22] Gheusi, G.; Bluthé, R. M.; Goodall, G.; Dantzer, R. Social and individual recognition in rodents: Methodological aspects and neurobiological bases, Behavioural Processes, Volume 33 (1994) no. 1-2, pp. 59-87 | DOI

[23] Ghirlanda, S.; Enquist, M. A century of generalization, Animal Behaviour, Volume 66 (2003) no. 1, pp. 15-36 | DOI

[24] Halekoh, U.; Højsgaard, S. A Kenward-Roger Approximation and Parametric Bootstrap Methods for Tests in Linear Mixed Models – The R Package pbkrtest, Journal of Statistical Software, Volume 59 (2014), pp. 1-32 | DOI

[25] Harlow, H. F. The formation of learning sets, Psychological review, Volume 56 (1949) no. 1, pp. 51-65 | DOI

[26] Kaminski, J.; Riedel, J.; Call, J.; Tomasello, M. Domestic goats, Capra hircus, follow gaze direction and use social cues in an object choice task, Animal Behaviour, Volume 69 (2005) no. 1, pp. 11-18 | DOI

[27] Kano, F.; Call, J. Cross-species variation in gaze following and conspecific preference among great apes, human infants and adults, Animal Behaviour, Volume 91 (2014), pp. 137-150 | DOI

[28] Kavšek, M.; Bornstein, M. H. Visual habituation and dishabituation in preterm infants: A review and meta-analysis, Research in Developmental Disabilities, Volume 31 (2010) no. 5, pp. 951-975 | DOI

[29] Keil, N. M.; Imfeld-Mueller, S.; Aschwanden, J.; Wechsler, B. Are head cues necessary for goats (Capra hircus) in recognising group members?, Animal Cognition, Volume 15 (2012) no. 5, pp. 913-921 | DOI

[30] Kendrick, K. M.; Atkins, K.; Hinton, M. R.; Broad, K. D.; Fabre-Nys, C.; Keverne, B. Facial and vocal discrimination in sheep, Animal Behaviour, Volume 49 (1995) no. 6, pp. 1665-1676 | DOI

[31] Kendrick, K. M.; da Costa, A. P.; Leigh, A. E.; Hinton, M. R.; Peirce, J. W. Sheep don't forget a face, Nature, Volume 414 (2001) no. 6860, pp. 165-166 | DOI

[32] Kohda, M.; Jordan, L. A.; Hotta, T.; Kosaka, N.; Karino, K.; Tanaka, H.; Taniyama, M.; Takeyama, T. Facial Recognition in a Group-Living Cichlid Fish, PLoS ONE, Volume 10 (2015) no. 11, p. e0142552 | DOI

[33] Krupenye, C.; Kano, F.; Hirata, S.; Call, J.; Tomasello, M. Great apes anticipate that other individuals will act according to false beliefs, Science, Volume 354 (2016) no. 6308, pp. 110-114 | DOI

[34] Langbein, J.; Nürnberg, G.; Manteuffel, G. Visual discrimination learning in dwarf goats and associated changes in heart rate and heart rate variability, Physiology and Behavior, Volume 82 (2004) no. 4, pp. 601-609 | DOI

[35] Langbein, J.; Moreno-Zambrano, M.; Siebert, K. How do goats “read” 2D-images of familiar and unfamiliar conspecifics?, Frontiers in Psychology, Volume 14 (2023), p. 1089566 | DOI

[36] Lansade, L.; Colson, V.; Parias, C.; Trösch, M.; Reigner, F.; Calandreau, L. Female horses spontaneously identify a photograph of their keeper, last seen six months previously, Scientific Reports, Volume 10 (2020), p. 6302 | DOI

[37] Leinwand, J. G.; Fidino, M.; Ross, S. R.; Hopper, L. M. Familiarity mediates apes’ attentional biases toward human faces, Proceedings of the Royal Society B: Biological Sciences, Volume 289 (2022) no. 1973, p. 20212599 | DOI

[38] Lombardi, C. M. Matching and oddity relational learning by pigeons (Columba livia): transfer from color to shape, Animal Cognition, Volume 11 (2008) no. 1, pp. 67-74 | DOI

[39] Lüdecke, D.; Ben-Shachar, M.; Patil, I.; Waggoner, P.; Makowski, D. performance: An R Package for Assessment, Comparison and Testing of Statistical Models, Journal of Open Source Software, Volume 6 (2021) no. 60, p. 3139 | DOI

[40] Méary, D.; Li, Z.; Li, W.; Guo, K.; Pascalis, O. Seeing two faces together: preference formation in humans and rhesus macaques, Animal Cognition, Volume 17 (2014) no. 5, pp. 1107-1119 | DOI

[41] Nawroth, C.; McElligott, A. G. Human head orientation and eye visibility as indicators of attention for goats (Capra hircus), PeerJ, Volume 5 (2017), p. e3073 | DOI

[42] Nawroth, C.; von Borell, E.; Langbein, J. ‘Goats that stare at men’: dwarf goats alter their behaviour in response to human head orientation, but do not spontaneously use head direction as a cue in a food-related context, Animal Cognition, Volume 18 (2015), pp. 65-73 | DOI

[43] Peirce, J. W.; Leigh, A. E.; DaCosta, A. P. C.; Kendrick, K. M. Human face recognition in sheep: lack of configurational coding and right hemisphere advantage, Behavioural Processes, Volume 55 (2001) no. 1, pp. 13-26 | DOI

[44] Peirce, J. W.; Leigh, A. E.; Kendrick, K. M. Configurational coding, familiarity and the right hemisphere advantage for face recognition in sheep, Neuropsychologia, Volume 38 (2000) no. 4, pp. 475-483 | DOI

[45] Pokorny, J. J.; de Waal, F. B. M. Monkeys recognize the faces of group mates in photographs, Proceedings of the National Academy of Sciences of the United States of America, Volume 106 (2009) no. 51, pp. 21539-21543 | DOI

[46] Racca, A.; Amadei, E.; Ligout, S.; Guo, K.; Meints, K.; Mills, D. Discrimination of human and dog faces and inversion responses in domestic dogs (Canis familiaris), Animal Cognition, Volume 13 (2010) no. 3, pp. 525-533 | DOI

[47] Rault, J.-L. Friends with benefits: Social support and its relevance for farm animal welfare, Applied Animal Behaviour Science, Volume 136 (2012) no. 1, pp. 1-14 | DOI

[48] Rivas-Blanco, D.; Monteiro, T.; Virányi, Z.; Range, F. Going back to ‘basics’: Harlow’s learning set task with wolves and dogs, https://www.biorxiv.org/content/10.1101/2023.03.20.533465v1, 2023 | DOI

[49] Schaffer, A.; Caicoya, A. L.; Colell, M.; Holland, R.; Ensenyat, C.; Amici, F. Gaze Following in Ungulates: Domesticated and Non-domesticated Species Follow the Gaze of Both Humans and Conspecifics in an Experimental Context, Frontiers in Psychology, Volume 11 (2020) | DOI

[50] Shank, C. C. Some Aspects of Social Behaviour in a Population of Feral Goats (Capra hircus L.), Zeitschrift für Tierpsychologie, Volume 30 (1972) no. 5, pp. 488-528 | DOI

[51] Shepherd, S. V.; Platt, M. L. Spontaneous social orienting and gaze following in ringtailed lemurs (Lemur catta), Animal Cognition, Volume 11 (2008) no. 1, pp. 13-20 | DOI

[52] Steckenfinger, S. A.; Ghazanfar, A. A. Monkey visual behavior falls into the uncanny valley, Proceedings of the National Academy of Sciences, Volume 106 (2009) no. 43, pp. 18362-18366 | DOI

[53] Tanaka, M. Development of the visual preference of chimpanzees (Pan troglodytes) for photographs of primates: effect of social experience, Primates, Volume 48 (2007) no. 4, pp. 303-309 | DOI

[54] Taylor, A. A.; Davis, H. The Response of Llamas (Lama glama) to Familiar and Unfamiliar Humans, International Journal of Comparative Psychology, Volume 9 (1996) no. 1, pp. 43-50 | DOI

[55] R Core Team R: A language and environment for statistical computing, https://www.R-project.org/, 2022

[56] Thieltges, H.; Lemasson, A.; Kuczaj, S.; Böye, M.; Blois-Heulin, C. Visual laterality in dolphins when looking at (un)familiar humans, Animal Cognition, Volume 14 (2011) no. 2, pp. 303-308 | DOI

[57] Tibbetts, E. A. Visual signals of individual identity in the wasp Polistes fuscatus, Proceedings of the Royal Society of London. Series B: Biological Sciences, Volume 269 (2002) no. 1499, pp. 1423-1428 | DOI

[58] Tibbetts, E. A.; Dale, J. Individual recognition: it is good to be different, Trends in Ecology & Evolution, Volume 22 (2007) no. 10, pp. 529-537 | DOI

[59] Tulving, E.; Kroll, N. Novelty assessment in the brain and long-term memory encoding, Psychonomic Bulletin & Review, Volume 2 (1995) no. 3, pp. 387-390 | DOI

[60] Veissier, I. Gazing behaviour as a tool to study goat cognition, Peer Community in Animal Science, Volume 1 (2024), p. 100267 | DOI

[61] Völter, C. J.; Huber, L. Expectancy Violations about Physical Properties of Animated Objects in Dogs, Proceedings of the Annual Meeting of the Cognitive Science Society, Volume 43 (2021) no. 43, pp. 2602-2608

[62] Wilson, V. A. D.; Bethell, E. J.; Nawroth, C. The use of gaze to study cognition: limitations, solutions, and applications to animal welfare, Frontiers in Psychology, Volume 14 (2023), p. 1147278 | DOI

[63] Winters, S.; Dubuc, C.; Higham, J. P. Perspectives: The Looking Time Experimental Paradigm in Studies of Animal Visual Perception and Cognition, Ethology, Volume 121 (2015) no. 7, pp. 625-640 | DOI

[64] Zayan, R.; Vauclair, J. Categories as paradigms for comparative cognition, Behavioural Processes, Volume 42 (1998) no. 2-3, pp. 87-99 | DOI

