Spatial Cognition and Artificial Intelligence: Methods for In-The-Wild Behavioural Research in Visual Perception


Mehul Bhatt – Örebro University – CoDesign Lab EU, Sweden

Vasiliki Kondyli – Örebro University, Sweden


The tutorial on “Spatial Cognition and Artificial Intelligence” addresses the confluence of empirically based behavioural research in the cognitive and psychological sciences with computationally driven analytical methods rooted in artificial intelligence and machine learning. This confluence is addressed against the backdrop of human behavioural research concerned with naturalistic, in-the-wild, embodied multimodal interaction.

Multiple Object Tracking in Virtual Reality with Eye Tracking and fNIRS for Cognitive Workload


Aleksandar Dimov – BIOPAC Systems Inc, USA

During this workshop we will present the use of virtual reality, eye tracking, and fNIRS (functional near-infrared spectroscopy) for the Multiple Object Tracking paradigm. This paradigm was chosen to illustrate the integration of the various techniques through a concrete example, but general applications will be discussed as well. We will explain experiment design and optimal data-collection techniques. Live participant data will be recorded, and performance, eye-tracking, and fNIRS data will be analyzed so that attendees can see the entire process from start to finish.
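To make the paradigm concrete: in a Multiple Object Tracking trial, identical dots move unpredictably, a cued subset must be tracked with attention, and accuracy is the proportion of cued targets the participant selects once motion stops. The sketch below is a hypothetical, minimal illustration of that trial logic in plain Python; it is not the workshop's VR implementation, and all values (display size, speed, trial length) are made-up defaults.

```python
import random

def run_mot_trial(n_objects=8, n_targets=4, n_frames=300, speed=2.0, seed=0):
    """Simulate one Multiple Object Tracking trial: identical dots take a
    random walk inside an 800x600 'display'; the participant must track
    the subset of dots cued before motion onset."""
    rng = random.Random(seed)
    dots = [[rng.uniform(0, 800), rng.uniform(0, 600)] for _ in range(n_objects)]
    targets = set(range(n_targets))          # indices cued before motion onset
    for _ in range(n_frames):
        for d in dots:
            # Random step per frame, clamped to the display bounds.
            d[0] = min(800.0, max(0.0, d[0] + rng.uniform(-speed, speed)))
            d[1] = min(600.0, max(0.0, d[1] + rng.uniform(-speed, speed)))
    return dots, targets

def score_response(targets, response):
    """Proportion of cued targets the participant correctly selected."""
    return len(targets & set(response)) / len(targets)
```

For example, selecting three of four cued targets yields `score_response({0, 1, 2, 3}, [0, 1, 2, 5])`, i.e. 0.75.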

Modelling the spatio-temporal properties of eye movements: methods, devices and applications


Alessandro Grillini – Reyedar B.V., Netherlands

Participants should have their own laptops and have access to Python 3 (a Jupyter Notebook is also fine) – please make sure you can open Google Colab in your browser (preferably Chrome).

This tutorial aims to provide a comprehensive overview of SONDA (Standardized Oculomotor Neuro-ophthalmic Disorders Assessment), a powerful yet simple method for analyzing the spatio-temporal properties of eye movements with relevant applications in fundamental and clinical vision research. By integrating insights from experimental research in visual neuroscience, psychophysics, and machine learning, this tutorial will offer participants a unique opportunity to understand better how the SONDA method can be used to study eye movements and their practical application in different clinical contexts. A novel medical device based on this method will be demonstrated during the tutorial.
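As background for analyzing the spatio-temporal properties of eye movements, a common first step is velocity-based classification of gaze samples into fixations and saccades. The sketch below is a generic I-VT (velocity-threshold) classifier, not part of the SONDA method itself, and the 30 deg/s threshold is an illustrative default rather than a recommended setting.

```python
import numpy as np

def detect_saccades(x, y, t, vel_threshold=30.0):
    """Velocity-threshold (I-VT) classification of gaze samples.
    x, y: gaze position in degrees of visual angle; t: timestamps in seconds.
    Returns a boolean array, True where a sample belongs to a saccade."""
    dt = np.diff(t)
    vx = np.diff(x) / dt
    vy = np.diff(y) / dt
    speed = np.hypot(vx, vy)                 # deg/s between consecutive samples
    # The first sample has no preceding velocity estimate; label it False.
    return np.concatenate([[False], speed > vel_threshold])
```

On a synthetic 100 Hz trace that fixates at 0 degrees and then jumps 5 degrees in one sample, only the jump sample exceeds the threshold and is flagged as saccadic.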

Visual psychophysics with OpenSesame


Sebastiaan Mathôt – University of Groningen, Netherlands

In this ECVP tutorial, you will learn how to build a visual-psychophysics experiment in OpenSesame 4.0. You will learn how to display complex visual stimuli (Gabor patches, noise textures, etc.) and how to use a Quest adaptive procedure to maintain equal performance across participants and conditions. You will also learn how to verify the temporal precision of your experiment. The tutorial will focus on the graphical user interface, but there will be additional challenges for those of you with Python coding experience.
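QUEST itself is a Bayesian adaptive procedure (Watson & Pelli, 1983). As a rough illustration of the underlying idea of adjusting stimulus intensity from the participant's responses, here is a much simpler 2-down/1-up transformed staircase in plain Python; it is a stand-in for intuition only, not OpenSesame's Quest implementation, and its parameters are arbitrary.

```python
class Staircase:
    """A 2-down/1-up transformed staircase: two consecutive correct
    responses make the task harder, one error makes it easier.
    This rule converges on roughly 70.7% correct performance."""

    def __init__(self, level=1.0, step=0.1, min_level=0.0):
        self.level = level            # current stimulus intensity
        self.step = step              # fixed step size (QUEST adapts this)
        self.min_level = min_level
        self._correct_streak = 0

    def update(self, correct):
        """Record one response and return the next stimulus level."""
        if correct:
            self._correct_streak += 1
            if self._correct_streak == 2:   # two in a row -> decrease intensity
                self.level = max(self.min_level, self.level - self.step)
                self._correct_streak = 0
        else:                               # one error -> increase intensity
            self.level += self.step
            self._correct_streak = 0
        return self.level
```

Starting from level 1.0, two correct responses lower the level to 0.9, and a subsequent error raises it back toward 1.0.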

Please bring your own laptop and install the latest version of OpenSesame 4.0 (not 3.3).

For more information, visit https://osdoc.cogsci.nl/4.0/ecvp2023

Understanding glare’s transformation of scene luminances into a different light pattern on the retinal receptors


Alessandro Rizzi – University of Milano, Italy

John McCann – McCann Imaging, USA

This course connects the measurements of physics with those of psychophysics (visual appearance). Our visual system performs complex spatial transformations of scene-luminance patterns using two independent spatial mechanisms: optical and neural. First, optical glare transforms scene luminances into a different light pattern on the receptors, called here retinal luminances. This tutorial introduces a new Python program that calculates retinal luminances from scene luminances. Equal scene luminances become unequal on the retina. Uniform scene segments become nonuniform retinal gradients; darker regions acquire substantial scattered light; and the retinal range of light changes substantially.
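The optical step described above, in which each scene luminance scatters a fraction of its light across neighboring retinal locations, can be sketched as a convolution with a glare point-spread function. The toy model below is not the tutorial's Python program: the power-law falloff loosely echoes CIE-style disability-glare models, and the constants (`size`, `k`) are made-up demo values, not calibrated ones.

```python
import numpy as np

def glare_psf(size=21, k=1e-3):
    """Illustrative glare point-spread function: scattered intensity falls
    off roughly as 1/distance^2 from the source. `k` sets the scattered
    fraction; both parameters are arbitrary demo values."""
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = xx**2 + yy**2
    psf = np.where(r2 == 0, 1.0, k / r2)   # direct image + scattered light
    return psf / psf.sum()                  # normalize to conserve total light

def retinal_luminance(scene, psf):
    """Convolve scene luminances with the glare PSF (zero-padded edges).
    The PSF is symmetric, so correlation and convolution coincide."""
    half = psf.shape[0] // 2
    padded = np.pad(scene, half)
    out = np.zeros_like(scene, dtype=float)
    for i in range(scene.shape[0]):
        for j in range(scene.shape[1]):
            patch = padded[i:i + psf.shape[0], j:j + psf.shape[1]]
            out[i, j] = (patch * psf).sum()
    return out
```

Running this on a single bright pixel in a dark field shows the effects the abstract describes: the peak retinal luminance drops below the scene luminance, formerly zero-luminance neighbors acquire scattered light, and total light is conserved.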