Keynote Speakers

João Manuel R. S. Tavares

Faculdade de Engenharia da Universidade do Porto, Portugal

Biomedical Imaging Segmentation: from thresholding to deep learning based methods


João Manuel R. S. Tavares graduated in Mechanical Engineering at the Universidade do Porto, Portugal in 1992. He also earned his M.Sc. and Ph.D. degrees in Electrical and Computer Engineering from the Universidade do Porto in 1995 and 2001, respectively, and attained his Habilitation in Mechanical Engineering in 2015. He is a senior researcher at the Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial (INEGI) and Full Professor at the Department of Mechanical Engineering (DEMec) of the Faculdade de Engenharia da Universidade do Porto (FEUP).



The segmentation of biomedical images by computational methods is very challenging, and it is mostly undertaken using thresholding, deformable models built on statistical, geometrical or physical principles, and/or machine-learning-based approaches. Current applications of segmentation methods include the identification of skin lesions, lungs, heart, prostate, liver, blood vessels, brain, ear, and related structures, to name a few. In this lecture, algorithms that we have developed to segment images acquired using different biomedical imaging modalities will be described, and their use in different applications discussed.
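To make the simplest of the approaches mentioned above concrete, the sketch below illustrates global thresholding on a toy 2-D "image" using NumPy. The image values and the threshold are made-up assumptions for illustration, not data or parameters from the lecture:

```python
import numpy as np

# Hypothetical toy "image": bright structure on a dark background
# (values stand in for pixel intensities).
image = np.array([
    [10, 12, 11, 200, 210],
    [ 9, 13, 198, 205, 202],
    [11, 10, 12, 199, 201],
], dtype=float)

# Global thresholding: every pixel above the threshold is labelled
# as foreground (the structure of interest), the rest as background.
threshold = 100.0
mask = image > threshold

print(mask.sum())  # number of pixels labelled as foreground
```

In practice a single global threshold rarely suffices for biomedical images, which is exactly why the lecture moves on to deformable models and learning-based methods.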

Constantine Dovrolis

Georgia Institute of Technology (Georgia Tech)

If the brain is a very sparse network, why does deep learning use dense neural networks?


Dr. Constantine Dovrolis is a Professor at the School of Computer Science at the Georgia Institute of Technology (Georgia Tech). He is a graduate of the Technical University of Crete (Engr.Dipl. 1995), University of Rochester (M.S. 1996), and University of Wisconsin-Madison (Ph.D. 2000). His research combines Network Science, Data Mining and Machine Learning with applications in climate science, biology, neuroscience, sociology and machine learning. More recently, his group has been focusing on neuro-inspired architectures for machine learning based on what is currently known about the structure of brain networks.


What we know from neuroscience (“connectomics”) is that the brain is, overall, a very sparse network with relatively small, locally dense clusters of neurons. These topological properties are crucial for the brain’s ability to operate efficiently and robustly, and to process information in a hierarchically modular manner. The artificial neural networks we use today, on the other hand, are very dense, or even fully connected, at least between successive layers. Additionally, it is well known that deep neural networks are highly over-parameterized: pruning studies have shown that it is often possible to eliminate 90% of the connections (weights) without significant loss in performance. Pruning, however, is typically performed after the dense network has been trained, which only improves the run-time efficiency of inference. These points suggest that we need methods to design sparse neural networks, without any training, that can perform almost as well as the corresponding dense networks after training. This talk will first provide some background on the pruning literature, covering both post-training and pre-training approaches. We will then present a recently proposed (ICML 2021) method called PHEW (Paths with Higher Edge Weights), which creates sparse neural networks before training that learn fast and generalize well. Additionally, PHEW does not require access to any data, as it depends only on the initial weights and the topology of the given network architecture.
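As a minimal illustration of the post-training pruning baseline described above (not of PHEW itself), the sketch below applies magnitude pruning to a single hypothetical weight matrix, zeroing the 90% of weights with the smallest absolute value. The matrix and sparsity level are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dense weight matrix of one trained layer.
W = rng.normal(size=(100, 100))

# Magnitude pruning: keep only the 10% of weights with the
# largest absolute value, zeroing the remaining 90%.
sparsity = 0.9
threshold = np.quantile(np.abs(W), sparsity)
mask = np.abs(W) >= threshold
W_pruned = W * mask

# Fraction of weights that survive pruning (~0.1).
print(mask.mean())
```

PHEW differs from this baseline in that it selects the sparse topology *before* training, by sampling paths biased toward higher initial edge weights, so no trained weights (and no data) are needed.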