Martin Magnusson - new professor 2026

Martin Magnusson is professor of computer science. Photo: Jerry Gray
Martin Magnusson is professor of computer science. His research focuses on how self-driving robots can understand, use and trust maps of their surroundings in complex environments.
“A key question is how a robot can determine what it knows – and what it doesn’t know – about the world around it,” he says.
1977 Born in Karlskoga, Sweden
2009 Obtained his PhD in computer science with the thesis “The three-dimensional normal-distributions transform: an efficient representation for registration, surface analysis, and loop detection”, Örebro University
2018 Docent in computer science, Örebro University
2025 Professor of computer science, Örebro University
Already as a computer science student at Uppsala University, Martin Magnusson knew that he wanted to do “something with AI”, and he took the opportunity to apply to Örebro University when a PhD position in robotics was announced.
“Getting the chance to research intelligent systems that can also do something for real, and not just AI in a computer, seemed perfect,” he says.
His research today revolves around how to describe maps, or models of the world, in a way that is useful for self-driving robots. How can autonomous systems in complex environments create, use, and trust both their maps and what they perceive around them? This can involve fundamental functionality such as positioning and route planning, but the work has gradually developed towards more abstract and general representations of space, movement, uncertainty, and human presence.
“My doctoral thesis laid a foundation for this work through the development of a mathematical model for describing 3D data and how it can be used to build maps and subsequently for localisation within them. The research was originally prompted by needs in mining environments, but the method has been adopted on a broad scale and has been used both in academic research and in self-driving machines in industry,” Martin Magnusson explains.
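The core idea behind the normal-distributions transform named in his thesis title can be illustrated with a short sketch: the point cloud is divided into voxels, and the points in each voxel are summarised by a Gaussian (a mean and a covariance), giving a compact map that a new scan can later be aligned against. The code below is a minimal illustration of that idea under those assumptions, not the actual research software; the function name and parameters are placeholders.

```python
import numpy as np

def build_ndt(points, cell_size=1.0, min_points=5):
    """Summarise a 3D point cloud as one Gaussian (mean, covariance) per voxel.

    `points` is an (N, 3) array. Names and default values here are
    illustrative, not taken from the actual research code.
    """
    cells = {}
    # Group points by the voxel they fall into.
    indices = np.floor(points / cell_size).astype(int)
    for idx, p in zip(map(tuple, indices), points):
        cells.setdefault(idx, []).append(p)

    ndt = {}
    for idx, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < min_points:
            continue  # too few samples for a stable covariance estimate
        mean = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False)
        ndt[idx] = (mean, cov)
    return ndt

# A compact map like this can then be used for registration: a new scan is
# aligned by maximising the likelihood of its points under the voxel
# Gaussians, rather than matching raw points one by one.
```

Because each cell stores only a mean and a small covariance matrix rather than all of its raw points, representations like this stay compact, which is part of what makes them practical in large environments.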
Since his thesis was published in 2009, he has moved on to other aspects of maps for robots beyond the purely geometric ones: What should a map contain in order to be useful in the real world? Can we formally model how a robot can quantify the reliability of its maps and its localisation? How can we describe movement patterns in a map over time, and how can a robot use that to become safer and more efficient—and less in the way?
“Today, I lead the research group Robot Navigation and Perception Lab, where we work on these topics and closely related questions,” he says.
One line of research concerns something researchers often describe as introspection. It deals with a central but often overlooked problem: the robot must “know what it doesn’t know”.
“We work on how the robot can detect that something has gone wrong in the map, and how it can measure – and even present to a user – where in the map it is easier or harder to position itself accurately. If the fundamental question when using a map is ‘Where am I?’, then this is also about ‘Do I trust this map, or is there a possibility that errors occurred in parts of it when it was created?’”
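One way to make that idea concrete (an illustrative example, not necessarily the group’s own method) is to score how well the local geometry constrains the robot’s position: if all surfaces in a region face the same way, as in a long, featureless corridor, the robot can slide along the corridor without the map revealing it. The sketch below computes such a degeneracy score from surface normals; the function name and the scoring are assumptions made for illustration.

```python
import numpy as np

def localizability(normals):
    """Score how well the local surface geometry constrains position.

    `normals` is an (N, 3) array of unit surface normals observed in a
    region of the map. The score compares the weakest and strongest
    constraint directions: near 0 means some direction is unconstrained,
    near 1 means the geometry constrains position in every direction.
    """
    H = normals.T @ normals          # 3x3 sum of n * n^T, the constraint structure
    eigvals = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
    return eigvals[0] / max(eigvals[-1], 1e-9)

# Example: a corridor whose wall normals all point along the x axis.
corridor_normals = np.tile([1.0, 0.0, 0.0], (100, 1))
print(localizability(corridor_normals))  # ~0: the robot can drift along the corridor unnoticed
```

A score like this is exactly the kind of quantity that could be presented to a user as a map overlay showing where localisation is likely to be reliable and where it is not.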
Another line of research concerns ways of creating and using maps that represent spatiotemporal movement patterns. In other words: ‘What usually happens here, and what is likely to happen?’
“Besides being able to build such a map and show how well it corresponds to reality – which is fun in itself – we also connect these kinds of maps to planning and prediction. If we know how people tend to move and can predict where a particular person will go next, we can also build systems in which one or several robots plan better routes where collisions can be avoided.”
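As an illustration of what a map of movement patterns might look like, the sketch below stores, for each grid cell and hour of day, a histogram over observed motion directions, and returns a distribution over likely directions when queried. The class, its fields, and the discretisation are hypothetical choices made for this example, not the group’s actual model.

```python
import numpy as np
from collections import defaultdict

class FlowMap:
    """Illustrative grid map of typical movement patterns.

    Each cell accumulates a histogram over eight motion directions, binned
    per hour of day, so the map can answer: which way do people usually
    move here, at this time?
    """

    def __init__(self, cell_size=1.0, n_dirs=8):
        self.cell_size = cell_size
        self.n_dirs = n_dirs
        self.counts = defaultdict(lambda: np.zeros((24, n_dirs)))

    def _key(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def observe(self, x, y, heading, hour):
        """Record one observed motion: position, heading (radians), hour of day."""
        d = int((heading % (2 * np.pi)) / (2 * np.pi) * self.n_dirs)
        self.counts[self._key(x, y)][hour, d] += 1

    def predict(self, x, y, hour):
        """Return a probability distribution over motion directions for a cell."""
        hist = self.counts[self._key(x, y)][hour]
        total = hist.sum()
        return hist / total if total > 0 else np.full(self.n_dirs, 1 / self.n_dirs)

# Usage: feed in tracked pedestrian motions, then query the likely flow.
fm = FlowMap()
fm.observe(2.3, 4.1, heading=0.1, hour=8)
print(fm.predict(2.3, 4.1, hour=8))
```

A planner that consults such a map can prefer routes that run with, rather than against, the typical flow of people, which is one way a robot can become both safer and less in the way.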
Another important part of the work of Martin Magnusson and his research group right now is robust perception in challenging environments: dust, fog, smoke. Their methods for precise localisation using radar sensors have had a major impact. Unlike cameras or laser scanners, radar works in all weather conditions, but it suffers from lower resolution and higher noise levels.
“We are now continuing to develop methods for recognising different objects in radar data and combining that with other sensors, such as thermal cameras,” says Martin Magnusson.