Three Topics in Computer Science
Flexible Task Allocation in a Mixed Multi-Agent System
Imagine a team consisting of cameras, humans, drones and ground vehicles working on surveillance tasks for a large area, such as an airport. Drones patrol the perimeter while stationary cameras observe critical sectors. When an irregularity is observed, a stationary camera cannot follow it or look at it in more detail, but it can call for help from its team mates. Conceiving the complete team as a multiagent system makes it possible to flexibly apply known protocols to allocate tasks - such as moving to the area of the alarm and scanning there.
The thesis shall develop and implement a multiagent team that includes one or more humans for surveillance tasks. This comprises the following sub-tasks:
* developing a uniform approach for communication between the very different agents, which basically means enabling every agent to send and read messages expressed in the same ACL (Agent Communication Language)
* providing the agents with the possibility to participate in a version of the Contract Net Protocol for handling surveillance tasks. One element of this protocol is that every agent can evaluate how much it would cost it to do the task (a minimal sketch of such a bidding round is given after this list).
* providing a user interface for the human agents, so that they can participate in the Contract Net Protocol.
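As a rough illustration of what the allocation machinery might look like - all class names, message fields and cost functions below are hypothetical placeholders, not a prescribed design - a single Contract Net round could be sketched in Python as follows: the initiator broadcasts a call for proposals, every agent estimates its own cost for the task, and the cheapest bidder wins the contract.

```python
from dataclasses import dataclass
import math

# Hypothetical, FIPA-ACL-inspired message record: every agent, whether drone,
# camera or human interface, exchanges the same message structure.
@dataclass
class Message:
    performative: str   # e.g. "cfp", "propose", "accept-proposal"
    sender: str
    receiver: str
    content: dict

@dataclass
class Agent:
    name: str
    position: tuple     # (x, y) in some shared map frame
    mobile: bool

    def cost(self, task: dict) -> float:
        """Estimate how expensive it would be for this agent to take the task.
        Here simply the distance to the alarm location; a stationary camera
        that cannot move returns infinity."""
        if not self.mobile:
            return math.inf
        tx, ty = task["location"]
        x, y = self.position
        return math.hypot(tx - x, ty - y)

def contract_net_round(initiator: str, agents: list[Agent], task: dict):
    """One simplified Contract Net round: call for proposals, collect bids,
    award the task to the cheapest bidder."""
    cfp = Message("cfp", initiator, "all", task)
    bids = []
    for agent in agents:
        c = agent.cost(cfp.content)
        if math.isfinite(c):
            bids.append(Message("propose", agent.name, initiator, {"cost": c}))
    if not bids:
        return None  # nobody can take the task
    winner = min(bids, key=lambda m: m.content["cost"])
    # In a full protocol an "accept-proposal" message would be sent back here.
    return winner.sender

# Example: a stationary camera raises an alarm; two drones and a ground vehicle bid.
team = [
    Agent("camera-3", (10, 10), mobile=False),
    Agent("drone-1", (0, 0), mobile=True),
    Agent("drone-2", (40, 5), mobile=True),
    Agent("ugv-1", (12, 30), mobile=True),
]
print(contract_net_round("camera-3", team, {"type": "scan", "location": (15, 12)}))
```

In the actual exam work the cost estimate would of course depend on the agent type (battery level, camera coverage, a human's current workload), and the human agents would place their bids through the user interface mentioned above.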
Depending on the interest of the student(s), the emphasis could be different. We are particularly interested in the development of the user interface.
Suitable for 2 HIng students collaborating, but can also be done by just one student in a reduced form.
Supervised by Franziska Klügl
Agent-Based Simulation of Media Usage in Crisis Situations
While authorities assume that citizens use public service media, the real world might look very different.
Whether citizens will de facto follow authorities' advice is highly uncertain, considering today's hyper-individualized, transnational, and polarized media landscape.
This exam work is a pilot study carried out together with researchers from Media and Communication Studies.
The task of the exam worker(s) is to develop and implement an agent-based simulation model that abstracts the Swedish media system, and to simulate and visualize different crisis scenarios, including antagonistic actors.
In different what-if simulations of risk scenarios, you will show how information is disseminated and what effect changes in transmission, additional conspiracy theories, reinforcement of particularly dramatic information, etc. have on which information actually reaches the population. Result visualization plays an important role.
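To make the modelling task more concrete, a deliberately minimal sketch of such an information-spread model is given below. The channel names, adoption rule and probabilities are illustrative assumptions only; realistic parameters and mechanisms would have to come from the media-studies side of the project. Citizen agents follow different channels, an official warning and a competing rumour spread over those channels, and the output is simply how many agents end up holding which version.

```python
import random

# Illustrative channels with assumed reach per time step (not empirical values).
CHANNELS = {"public_service": 0.6, "social_media": 0.8, "messaging_app": 0.4}

class Citizen:
    def __init__(self, channels):
        self.channels = channels       # which channels this citizen follows
        self.belief = None             # None, "official" or "rumour"

    def receive(self, channel, message):
        # First message received via a followed channel is adopted; a real model
        # would include trust, prior attitudes, fact-checking, social contacts, etc.
        if channel in self.channels and self.belief is None:
            self.belief = message

def simulate(n_citizens=1000, steps=20, seed=1):
    rng = random.Random(seed)
    population = [
        Citizen({c for c in CHANNELS if rng.random() < 0.5}) for _ in range(n_citizens)
    ]
    for _ in range(steps):
        # Authorities push the official warning via public service media ...
        for citizen in population:
            if rng.random() < CHANNELS["public_service"]:
                citizen.receive("public_service", "official")
        # ... while an antagonistic actor seeds a rumour on social media.
        for citizen in population:
            if rng.random() < CHANNELS["social_media"] * 0.3:
                citizen.receive("social_media", "rumour")
    counts = {"official": 0, "rumour": 0, None: 0}
    for citizen in population:
        counts[citizen.belief] += 1
    return counts

print(simulate())  # e.g. {'official': ..., 'rumour': ..., None: ...}
```

The what-if scenarios of the exam work would then correspond to varying exactly such parameters - channel reach, the strength of the antagonistic actor, the dramatization of messages - and visualizing the resulting distribution over the population.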
Suitable for 2 HIng students collaborating, but can also be done by just one student in a reduced form.
Supervised by Franziska Klügl with support from Prof. Peter Berglez (Media and Communication Studies)
Tool or Collaborator? -- Human Perception of LLM-based Agents in Human-AI Teams
Human-AI Teaming is currently one of the hottest topics in AI and Multiagent Research.
There are more and more publications on how to develop good multi-agent workflows and on what qualities an AI-based team mate should possess. In this exam work, we want to change the perspective: what properties of an interaction, e.g. pro-activeness, latency, or properties of the delegated task, make a human perceive an LLM-based agent as a partner rather than a tool? This has consequences for how the team actually functions.
What needs to be done?
* look at the relevant literature to identify candidates for interaction properties
* develop and implement a test scenario in which a human interacts with one or more LLM-based agents to solve a task. It should also be possible to measure performance on the task (how well or how fast the goal is achieved); see the sketch after this list.
* perform an experimental study using the test scenario with a number of human participants
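As a very rough idea of what the measurement side of the test scenario could look like - the agent below is only a stub, and all names and thresholds are hypothetical - one might log response latency, proactive turns and task outcome per session roughly as follows; in the actual exam work the stub would wrap a real LLM call, and the interaction properties would be varied systematically between experimental conditions.

```python
import time
from dataclasses import dataclass, field

@dataclass
class SessionLog:
    """Per-session measurements for the experimental study: how long the agent
    took to answer, how often it acted without being asked, and whether the
    task was solved."""
    latencies: list = field(default_factory=list)
    proactive_turns: int = 0
    task_solved: bool = False

def stub_agent(prompt: str) -> str:
    """Placeholder for the LLM-based team mate; the real scenario would call an
    LLM here and could delay answers or volunteer suggestions to manipulate the
    perceived partner-vs-tool properties."""
    time.sleep(0.1)                    # simulated response latency
    return f"Suggestion for: {prompt}"

def run_session(user_turns: list[str], log: SessionLog) -> SessionLog:
    for turn in user_turns:
        start = time.perf_counter()
        reply = stub_agent(turn)
        log.latencies.append(time.perf_counter() - start)
        if "unprompted" in reply:      # crude placeholder proxy for pro-activeness
            log.proactive_turns += 1
    log.task_solved = True             # in the real study: check the goal criteria
    return log

log = run_session(["where is object A?", "plan the next step"], SessionLog())
print(f"mean latency: {sum(log.latencies) / len(log.latencies):.2f}s, "
      f"proactive turns: {log.proactive_turns}, solved: {log.task_solved}")
```

Such logged measures would then be combined with the questionnaire-based perception data (partner vs. tool) collected from the human participants.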
Suitable for 1-2 HIng students collaborating.
Supervised by Franziska Klügl
Advertisement details
Advertiser: Örebro universitet
Apply by:
Ad category: Thesis work, internship, essay
Area of interest: Computer science and IT, Technology and mathematics
Contact person: Franziska Klügl franziska.klugl@oru.se
Website: https://www.oru.se/