Location Coding of Tool-Object Pairs Based on Perceptual Grouping: Evidence from Object-Based Correspondence Effect
Antonello Pellicano
2025-01-01
Abstract
Motor interactions with single objects, as well as with pairs of objects, can be automatically affected by the visual asymmetries produced by protruding parts, whether or not those parts are handles. Performance is typically faster and more accurate when task-defined responses correspond to the location of such protruding parts than when they do not (i.e., object-based spatial correspondence effects). In two experiments we investigated the mechanisms underlying the spatial coding of tool-object pairs in which semantic relatedness and action alignment were orthogonally combined. Centrally presented pictures of “active” tools (depicted as potentially performing their proper action) were paired, on one side, with a “passive” object (the target of the tool’s action). We observed S-R correspondence effects that depended on the location of the protruding side of the tool-object pair, not on the location of the non-protruding handle side of the tool. The results thus further supported the location coding account of the effect over the affordance activation account. The effect emerged only when the tool and the object belonged to the same semantic category or were correctly aligned for action, with no further interplay between these two factors. This is inconsistent with the idea that action links are coded between tool-object pairs and that the resulting action direction interacts with response spatial codes. Instead, we propose that semantic relatedness and action alignment act, independently of each other, as perceptual grouping criteria that allow the basic spatial coding of visual asymmetries to take place. At the neurocognitive level, this invites speculation about independent processing along the ventral and ventro-dorsal streams.