52nd NORDTEK conference

Speakers

Yoonsuck Choe

Meaning and Consciousness in Brain and Artificial Intelligence

Bio

Yoonsuck Choe received his B.S. degree in computer science from Yonsei University, Seoul, Korea, in 1993, and his M.S. and Ph.D. degrees in computer sciences from the University of Texas at Austin, Austin, TX, USA, in 1995 and 2001, respectively. He is a Professor and Director of the Brain Networks Laboratory in the Department of Computer Science and Engineering at Texas A&M University.

From 2017 to 2019, he led the Machine Learning Lab, and subsequently the AI Core Team, at the Samsung Research AI Center, where he was a corporate vice president. His research interests lie broadly in computational neuroscience, deep learning, evolutionary algorithms, and neuroimaging/neuroinformatics. He has published extensively in these areas, with over 140 publications, including two best paper awards and one best paper award nomination. He served as the program chair of the International Joint Conference on Neural Networks in 2015 and as its general chair in 2017.

He currently serves on the editorial boards of Neural Networks (2024-present) and IEEE Transactions on Cognitive and Developmental Systems (2023-present).

Abstract

In this talk, I will discuss two fundamental questions in brain science and AI: meaning and consciousness. I will demonstrate through simple computational experiments how taking an internal perspective can help us get to the essence of the problem. From this perspective, we will find that sensorimotor interaction is key to meaning and understanding, and that predictive neural dynamics play an important role in consciousness.
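
As a flavor of what such an experiment can look like, here is a minimal toy sketch (entirely hypothetical, not the speaker's actual model): an agent that never observes the world directly, only its own scalar sensory stream, and that learns one linear forward model per motor action to predict the sensory consequence of that action.

    # Toy sensorimotor loop (hypothetical illustration, not the speaker's model).
    # The agent sees only its scalar sensor value and learns, per action, a
    # linear predictor of its next sensor value from the current one.
    import numpy as np

    rng = np.random.default_rng(0)
    world = rng.random(20)   # hidden 1-D environment, never shown to the agent
    pos = 10                 # agent's (hidden) position in that environment
    weights = np.zeros(3)    # one forward-model weight per action: left/stay/right

    for step in range(5000):
        action = int(rng.integers(0, 3))            # 0 = left, 1 = stay, 2 = right
        sense_now = world[pos]                      # current sensory input
        pos = int(np.clip(pos + action - 1, 0, 19))
        sense_next = world[pos]                     # sensory consequence of acting
        error = sense_next - weights[action] * sense_now
        weights[action] += 0.1 * error * sense_now  # delta-rule update

    print("forward-model weights (left, stay, right):", weights)

Run long enough, the weight for the "stay" action approaches 1 while the others settle at lower values: purely from the inside, the agent has discovered which of its actions leaves its sensory input invariant, a minimal example of sensorimotor interaction grounding the meaning of a sensory state.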

Alla Anohina-Naumeca

Adoption of AI tools in engineering education in Latvia

Bio

Alla Anohina-Naumeca received her Dr.sc.ing. degree in information technology from Riga Technical University, Latvia, in 2007. She defended her second PhD, in pedagogy, at the University of Latvia in 2018. She is a professor and the Vice-Dean for Academic Affairs at the Faculty of Computer Science, Information Technology and Energy of Riga Technical University in Latvia.

She has twenty years of teaching experience, has authored more than 80 scientific publications, and has taken part in more than 30 research projects in the fields of computer science, artificial intelligence, education improvement, and educational software. She teaches courses on artificial intelligence to both IT and non-IT students.

In 2020, she participated in a faculty training program at the School of Engineering and Applied Sciences of the University at Buffalo (USA), taking courses on machine learning and engineering education. For the last two years, she has also worked as an adjunct professor at the Norwegian University of Science and Technology in Ålesund, Norway. Her scientific interests include computer science (especially artificial intelligence), pedagogy, and academic integrity, as well as the intersections of these fields.

Michael Kampffmeyer

Towards Explainable Deep Learning Models

Bio

Michael Kampffmeyer is an Associate Professor at UiT The Arctic University of Norway. He is also a Senior Research Scientist II at the Norwegian Computing Center in Oslo. His research interests include medical image analysis, explainable AI, and learning from limited labels (e.g., clustering, few/zero-shot learning, domain adaptation, and self-supervised learning).

Kampffmeyer received his PhD degree from UiT in 2018. He has had long-term research stays in the Machine Learning Department at Carnegie Mellon University and at the Berlin Center for Machine Learning at the Technical University of Berlin. He is a general chair of the annual Northern Lights Deep Learning Conference (NLDL).

For more details, visit https://sites.google.com/view/michaelkampffmeyer/.

Abstract

Despite the significant advances deep learning models have brought to solving complex real-world problems, their lack of transparency remains a major barrier. This has led to an increased focus on explainable artificial intelligence (XAI), which seeks to demystify a model's decisions in order to increase trustworthiness. Within the XAI domain, two primary approaches have emerged: one that retroactively explains a model's predictions (post-hoc explanations), and another that integrates explanations into the model's output (self-explanations). In this talk, we will delve into the latter, highlighting the development and potential of inherently self-explanatory models.
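
To make the distinction concrete, here is a minimal sketch in PyTorch. The models and input are hypothetical stand-ins, not the speaker's own work: the first part computes a post-hoc gradient saliency map for an ordinary classifier, and the second defines a toy self-explaining classifier whose attention weights over image patches are returned alongside the prediction.

    # Post-hoc vs. self-explaining XAI, as a hypothetical toy contrast.
    import torch
    import torch.nn as nn

    # Post-hoc: explain an already-built black-box model after the fact.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in net
    x = torch.rand(1, 1, 28, 28, requires_grad=True)             # stand-in input
    logits = model(x)
    logits[0, logits.argmax()].backward()  # attribute the top class to the input
    saliency = x.grad.abs().squeeze()      # 28 x 28 per-pixel relevance map

    # Self-explaining: the model emits prediction and explanation together.
    class SelfExplaining(nn.Module):
        """Toy classifier with soft attention over 16 non-overlapping 7x7
        patches; the attention weights double as the model's explanation."""
        def __init__(self):
            super().__init__()
            self.attend = nn.Linear(49, 1)    # relevance score per patch
            self.classify = nn.Linear(49, 10)

        def forward(self, x):
            patches = x.unfold(2, 7, 7).unfold(3, 7, 7).reshape(x.size(0), 16, 49)
            attn = torch.softmax(self.attend(patches).squeeze(-1), dim=1)
            pooled = (attn.unsqueeze(-1) * patches).sum(dim=1)
            return self.classify(pooled), attn  # prediction plus explanation

    prediction, explanation = SelfExplaining()(x)

The post-hoc map is produced without modifying the model at all, whereas the self-explaining variant is built so that its explanation falls out of the forward pass itself, which is the design philosophy the talk focuses on.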

Michael Felsberg

Computer Vision - Machine Learning Meets Applied Mathematics

Bio

Michael Felsberg received the PhD degree from Kiel University, Germany, in 2002, and the docent degree from Linköping University in 2005. He is a full professor at Linköping University, Sweden.

His research interests include visual object tracking; video object and instance segmentation; classification, segmentation, and registration of point clouds; and efficient machine learning techniques for incremental, few-shot, and long-tailed settings.

Abstract

The field of computer vision is a subarea of AI, and it has its roots in the modeling of the human visual system (HVS). It is commonly accepted that about 80% of what we perceive is vision-based (Ripley and Politzer 2010), yet modeling vision is a systematically underestimated scientific challenge, an implication of Moravec's paradox: "we're least aware of what our minds do best" (Minsky 1986). The highly intuitive nature of the HVS makes it difficult for us to appreciate the myriad of interdisciplinary problems associated with computer vision.

The research at the Computer Vision Laboratory (CVL) has a strong focus on theory and methods, in particular within machine learning, signal processing, and applied mathematics. The resulting methods are applied in fields where technical systems must coexist with humans and therefore predict their actions. Application domains include self-driving cars that share road space and interact with humans, sustainable forestry and agriculture, monitoring of greenhouse gases, and the classification and monitoring of animals.

My talk will cover a wide range of topics within machine learning for computer vision and robot perception: few-shot and weakly supervised learning, geometric deep learning, semi-supervised and incremental learning, scene flow estimation, uncertainty representation, and video and semantic segmentation.

Filip Ginter

The Promise and Challenges of Generative AI from an NLP Researcher's Perspective

Bio

Filip Ginter is a professor of Language Technology and one of the founding members of the TurkuNLP research group. He specializes in Natural Language Processing (NLP), particularly from the perspective of machine learning and very large textual data collections.

His research spans from building fundamental training datasets, through model development and training for essential text structure and meaning analysis tasks, to NLP applied at a very large scale in a supercomputing environment. Within these areas, he focuses on multilingual methods, basic NLP development for Finnish and other small languages, and NLP applications to noisy historical data collections.

Filip has co-authored some 150 peer-reviewed articles in NLP conferences and journals and is also active in various roles in doctoral education and research administration.

Laura Ruotsalainen

AI for sustainable industrial automation

Bio

Laura Ruotsalainen is a Professor of Spatiotemporal Data Analysis for Sustainability Science at the Department of Computer Science, University of Helsinki, Finland. She leads the Spatiotemporal Data Analysis for Sustainability Science (SDA) research group, which develops estimation, machine learning, and computer vision methods that use spatiotemporal data for sustainable smart cities, especially via smart mobility. She is a member of the steering group of the Finnish Center for Artificial Intelligence (FCAI) and leads an FCAI Highlight area called AI for Sustainability. She is also a professor at the Helsinki Institute of Sustainability Science (HELSUS), a cross-faculty research unit in sustainability science within the University of Helsinki.

Oshani Weerakoon

AI tools from the perspective of a student

Bio

I am currently in the second year of my master's in Software Engineering at the University of Turku. I also work as a Research Assistant in the Software Engineering unit of the Department of Computing. My research interests are Generative AI, AI in Education, and Software Usability.

Linda Mannila

AI – from computation to societal impact

Bio

Linda Mannila is an Associate Professor specializing in the societal aspects of AI at the Department of Computer Science, University of Helsinki, Finland. She also holds the position of Adjunct Associate Professor in Computer Science Education at Linköping University, Sweden. Her research explores public perceptions of AI as well as AI in education, with a particular focus on AI literacy from student, teacher, and organizational perspectives. She also leads NordicEdAI, a cross-Nordic interdisciplinary network dedicated to advancing AI in education.

Jussi Puura

Manager, Disruptive Technologies and Future Visions, Sandvik Mining and Rock Solutions

Teijo Hiito

Digital Product Manager, Wärtsilä Finland Oy