Perception of visual-tactile co-location in the first year of life
| Content Provider | Semantic Scholar |
|---|---|
| Author | Freier, Livia; Mason, Luke; Bremner, Andrew J. |
| Copyright Year | 2016 |
| Abstract | An ability to perceive tactile and visual stimuli in a common spatial frame of reference is a crucial ingredient in forming a representation of one’s own body and the interface between bodily and external space. In this study we investigated young infants’ abilities to perceive co-location between tactile and visual stimuli presented on the hands. We examined infants’ visual preferences for spatially congruent and incongruent visual-tactile events across two age groups (6 months and 10 months). We observed increased duration of looking to incongruent stimulus displays in both age groups, indicating that infants from at least 6 months of age demonstrate the ability to determine whether simultaneously presented visual-tactile perceptual events are co-located or not. These findings indicate that an ability to perceive visual and tactile stimuli within a common spatial frame of reference is available by the end of the first half year of life. |

A paradigmatic question for philosophers and developmental scientists alike concerns whether human infants are able to perceive space amodally – i.e., whether they can build a common representation of space independent of the particular input modality (e.g., Eilan, 1993; Meltzoff, 1993). Research with human adults and animals has shown that whether or not stimuli to different sense modalities originate from a common location in external space has important implications for neural processing and behaviour (e.g., Meredith & Stein, 1996; Stein & Stanford, 2008; Spence & Driver, 2004; Wallace, Roberson, Hairston, Stein, Vaughan, & Schirillo, 2004). Adults perceive and make use of spatial commonalities across the senses in a seemingly effortless manner. However, given the substantial differences between adults and infants in the degree and quality of their multisensory experience, we cannot assume that infants possess the same ability to represent multisensory space.

It is now common to argue that spatial properties of the environment (e.g., shape and place) number among a range of “amodal” sensations which are specified in a redundant manner across modalities (e.g., Bahrick & Lickliter, 2000; Gibson, 1969; Walker-Andrews, 1994). Developmental scientists have been by no means idle when it comes to addressing the question of how infants and young children develop in their ability to perceive such aspects of the multisensory environment (for a recent review see Bahrick & Lickliter, 2012; although note that there is some disagreement concerning what counts as an amodal property, Lewkowicz & Kraebel, 2004). However, multisensory perceptual development has been investigated primarily via studies of infants’ learning about crossmodal temporal relations (typically in the auditory and visual modalities). For instance, it has been demonstrated that an ability to detect audiovisual synchrony and intensity (loudness matched with brightness) emerges early in infancy (e.g., Bahrick, 1992; Bahrick, Flom, & Lickliter, 2002; Lewkowicz, 1996; Lewkowicz, 2000; Lewkowicz & Turkewitz, 1980; Spelke, 1976).
Similarly, it has been suggested that synchrony between sound and vision (Lewkowicz, Leo, & Simion, 2010), and between vision and touch (Filippetti, Johnson, Lloyd-Fox, Dragovic, & Farroni, 2013), may be readily perceived from the moment of birth, with some crossmodal temporal links available prenatally in some non-human species such as bobwhite quails (Jaime, Bahrick, & Lickliter, 2010).

Investigations of the development of an ability to perceive spatial commonalities across the senses are less frequent in the literature. Some studies suggest that very young infants can notice whether or not sounds and sights are co-located in external space, but such findings have not always been easy to replicate (e.g., Aronson & Rosenbloom, 1971; McGurk & Lewis, 1974; Morrongiello, Fenwick, & Chance, 1998). Scant research has examined infants’ abilities to detect and represent common spatial aspects of visual-tactile stimulation. This is surprising given the importance of linking visual and tactile events for the development of a coherent perception of the body and the embodied environment (Bremner & Cowie, 2013). Research efforts in this area have focused primarily on infants’ recognition of the shapes and textures of objects across unimodal presentations (crossmodal transfer tasks; see Streri, 2012), and generally support the notion that the ability to match shape and texture between the tactile and visual modalities is an early acquired skill (e.g., Abravanel, 1981; Bryant, Jones, Claxton, & Perkins, 1972; Rose, 1994), one that is present even in newborns, albeit in a limited way (e.g., Sann & Streri, 2007; Streri & Gentaz, 2003, 2004). However, because the spatial matches in crossmodal transfer tasks are “field independent”, they do not require an ability to locate features within a common spatial frame of reference, and therefore do not indicate whether infants perceive such multisensory events in an external (or even a peripersonal) spatial environment (e.g., Bremner & Cowie, 2013; Eilan, 1993).

One way to investigate the extent to which participants can coordinate representations of space across the senses is via crossmodal orienting responses. Visual orienting responses to sounds have been reported in newborns (e.g., Butterworth & Castillo, 1976; Wertheimer, 1961), although with an extended developmental trajectory throughout the first months (Clifton, Morrongiello, Kulig, & Dowd, 1981; Muir & Field, 1979). Whilst both reflexive orienting of the head to touch (e.g., Fényes, Gergely, & Tóth, 1960; Sherrington, 1910; Zappella & Simopoulos, 1966) and habituation of head turning to touch (e.g., Moreau, Helfgott, Weinstein, & Milner, 1978) have been reported, oculomotor responses to tactile events are surprisingly infrequent in young infants. Bremner, Mareschal, Lloyd-Fox, and Spence (2008) investigated visual orienting to vibrotactile stimuli in 6- and 10-month-olds by presenting stimuli unpredictably to each hand. Only the 10-month-olds in this study were able to consistently orient towards the stimulated location, indicating that the ability to coordinate visual and tactile frames of reference undergoes significant development throughout the first year of life. In many ways it seems unsurprising that infants may struggle to translate between visual and tactile frames of reference, as, in computational terms, this is not a trivial problem.
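To make this computational point concrete, here is a minimal sketch (not taken from the paper; the one-dimensional left/right coding, the "crossed"/"uncrossed" posture labels, and the function names are illustrative assumptions) of why a touch coded on the skin cannot be assigned a location in external space without information about current limb posture.

```python
# Minimal illustrative sketch (not from the paper): assigning an external
# location to a touch requires combining the skin-based code with current
# limb posture. We assume a simplified one-dimensional workspace in which
# external sides are coded as -1 (left of the body midline) and +1 (right).

def touch_location_external(touched_hand: str, posture: str) -> int:
    """Return the external side (-1 = left, +1 = right) of a touch on a hand.

    touched_hand: "left" or "right" (anatomical identity of the stimulated hand)
    posture: "uncrossed" or "crossed" (whether the arms are crossed at the midline)
    """
    anatomical_side = -1 if touched_hand == "left" else +1
    # With uncrossed arms the anatomical and external codes coincide; crossing
    # the arms reverses the mapping, so the same skin location now lies on the
    # opposite side of external space.
    return anatomical_side if posture == "uncrossed" else -anatomical_side


def co_located(touched_hand: str, posture: str, visual_side: int) -> bool:
    """A visual-tactile pair is co-located when the touch, remapped into
    external coordinates, falls on the same side as the visual stimulus."""
    return touch_location_external(touched_hand, posture) == visual_side


# Example: a touch on the left hand is co-located with a visual stimulus on
# the left when the arms are uncrossed, but with one on the right when the
# arms are crossed.
assert co_located("left", "uncrossed", -1)
assert co_located("left", "crossed", +1)
```

In this toy scheme, remapping amounts to conditioning the skin-to-space assignment on posture; the point carried by the surrounding text is that the machinery for such computations appears to develop only gradually across infancy.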
Adult humans and primates are equipped with neural circuits that continuously update correspondences between visual and tactile spatial frameworks across changes in posture (e.g., Azañón & Soto-Faraco, 2008; Graziano, Gross, Taylor, & Moore, 2004; Lloyd, Shore, Spence, & Calvert, 2002; Rigato, Bremner, Mason, Pickering, Davis, & Van Velzen, 2013). Indeed, as a result of eye movements, these computational challenges also impinge on our ability to detect co-location between visual and auditory stimuli (Pöppel, 1973). Research with infants indicates that an ability to incorporate information about posture into sensory processing develops gradually across the first year of life (e.g., Bremner et al., 2008; Rigato, Begum Ali, Van Velzen, & Bremner, 2014), and continues even into early childhood (e.g., Begum Ali, Cowie, & Bremner, 2014; Pagel, Heed, & Röder, 2009).

It is especially pertinent to the current investigation that these challenges posed by variations in body and limb posture are particularly complex across ontogenetic development. Not only do the relative sizes and shapes of the limbs, body, and head change rapidly, even from day to day (Lampl, Veldhuis, & Johnson, 1992), but, additionally, the number and variety of postural changes which an infant can readily and spontaneously execute become increasingly complex with age (e.g., Van Hof et al., 2002).

As outlined above, infants do not easily locate unimodal tactile stimuli via visual orienting responses until 10 months of age (Bremner et al., 2008). This may be explained by the extended development, up to 10 months of age, of an ability to take account of posture when locating tactile stimuli (Bremner et al., 2008; Rigato et al., 2014). Thus, the ability to perceive tactile and visual stimuli within a common external spatial frame of reference may develop slowly in the first year, implying a state of tactile solipsism in the first months of life. However, no research has yet investigated the development of an ability to register visual-tactile co-location in early infancy. Indeed, it is possible that the presence of a distinct visual stimulus (which was not present to guide the crossmodal orienting responses to tactile stimuli studied by Bremner et al., 2008) may aid infants in determining the spatial frame of reference within which to locate a tactile stimulus. Thus, we investigated the ability to perceive visual-tactile spatial co-location in 6- and 10-month-old infants.

-- Insert Figure 1 about here --

We investigated whether infants would show a spontaneous visual preference for spatially congruent or incongruent visual-tactile stimulus pairs presented on the hands. Infants’ looking behaviour was compared in response to two sets of stimulus combinations (see Fig. 1): (1) an incongruent condition in which visual and tactile stimuli were presented concurrently on different hands, and (2) a congruent condition in which the visual and tactile stimuli were presented together on the same hand. Stimuli in both congruent and incongruent presentations always alternated between hands. This step was taken in order to prevent influences on looking behaviour from proximal aspects of the stimulation (e.g., a preference for visual stimuli on the right hand). Because the only variation between conditions concerns the common or separate locations of v…
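As a rough illustration of the counterbalancing logic described in the preceding paragraph, the sketch below builds congruent and incongruent trial lists in which the stimulated pair alternates between hands. It is not the authors' code; the helper name `build_trials`, the number of presentations, and the starting hand are hypothetical.

```python
# Illustrative sketch (not the authors' code) of the trial structure implied by
# the design described above: congruent trials pair the visual and tactile
# stimuli on the same hand, incongruent trials place them on different hands,
# and in both conditions the stimulated pair alternates between hands across
# presentations. Trial count and ordering here are hypothetical.

from itertools import cycle, islice

HANDS = ("left", "right")


def build_trials(condition: str, n_presentations: int = 8):
    """Return a list of (tactile_hand, visual_hand) pairs for one condition."""
    trials = []
    for tactile_hand in islice(cycle(HANDS), n_presentations):
        if condition == "congruent":
            visual_hand = tactile_hand                          # same hand
        else:
            visual_hand = HANDS[1 - HANDS.index(tactile_hand)]  # opposite hand
        trials.append((tactile_hand, visual_hand))
    return trials


if __name__ == "__main__":
    print("congruent:  ", build_trials("congruent", 4))
    # -> [('left', 'left'), ('right', 'right'), ('left', 'left'), ('right', 'right')]
    print("incongruent:", build_trials("incongruent", 4))
    # -> [('left', 'right'), ('right', 'left'), ('left', 'right'), ('right', 'left')]
```

Because the stimulated pair alternates between hands in both conditions, any proximal preference for one hand or side of space contributes equally to congruent and incongruent trials, so a difference in looking duration between the conditions reflects sensitivity to whether the visual and tactile stimuli are co-located.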
| File Format | PDF, HTM / HTML |
| Alternate Webpage(s) | http://research.gold.ac.uk/18138/1/CoLocation_GRO.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |