«A formal, methodical approach to artificial intelligence suits me well»

March 05, 2024


«I have always been fascinated by the unknown aspects of artificial consciousness and the boundaries within. A formal, methodical approach to artificial intelligence (AI) suits me well,» says Rustam Galimullin.

He is a postdoctoral researcher at the Department of Information Science and Media Studies (UiB).

Though his research is abstract, it carries practical implications for artificial intelligence practices. Galimullin creates mathematical models to assess whether future robots can communicate more effectively than current ones.

Making robots socially intelligent 

Why invest time and research funds in something like that, some might ask. Galimullin believes there are at least two compelling reasons.

Reason number one:

Current robots excel at solving well-defined tasks with clear objectives in limited physical environments. They also excel at tasks humans find challenging: as early as 1997, a chess computer defeated the reigning world champion. What robots currently struggle with, however, is handling situations that require interpreting social context and seeing the world from others' perspectives.

«This means that before we can safely employ robots in more open contexts, we must teach them at least a semblance of social intelligence. An essential aspect of social intelligence is the ability to communicate with others and act based on new information. Therefore, it is crucial to determine how we can program robots to become better communicators,» says Galimullin.

Mathematical models come in handy

Reason number two:

Enhancing the social intelligence of robots is just one aspect of the job. Researchers must also test the functionality and safety of the robots.

«Before we release them into the world, we must ensure that they operate as intended. We need to know they are capable of performing the tasks we assign them and, most importantly, that it is safe to have them interact with humans. This validation process can be accomplished using mathematical models,» Galimullin explains.
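The article does not spell out what this validation looks like, but one standard technique in the field is to exhaustively explore every reachable state of a robot's program and check a safety property in each one. The sketch below is purely illustrative and not Galimullin's actual models; the corridor world, the movement rules, and the "no collision" property are all invented assumptions.

```python
# Toy model-checking sketch (illustrative only, not the article's models):
# exhaustively explore all reachable states of a tiny robot program and
# check a safety property in every one of them.
from collections import deque

# State: (robot_position, human_position) on a 1-D corridor of 4 cells.
START = (0, 3)

def successors(state):
    """All states reachable in one step: the robot may stay or step right,
    the human may stay or step left (both clamped to the corridor)."""
    robot, human = state
    for dr in (0, 1):
        for dh in (0, -1):
            r, h = robot + dr, human + dh
            if 0 <= r <= 3 and 0 <= h <= 3:
                yield (r, h)

def safe(state):
    """Safety property: robot and human never occupy the same cell."""
    robot, human = state
    return robot != human

# Breadth-first exploration of the full reachable state space.
seen, frontier, violations = {START}, deque([START]), []
while frontier:
    state = frontier.popleft()
    if not safe(state):
        violations.append(state)
    for nxt in successors(state):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)

print(violations)  # non-empty: this naive program CAN collide, so it fails verification
```

Because the search visits every reachable state, a non-empty `violations` list is a proof that the program is unsafe as written, which is exactly the kind of guarantee exhaustive model checking offers over ordinary testing.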

The postdoctoral researcher points to cases where robots currently lack street smarts:

For instance, there is a case from a Danish hospital where sophisticated and expensive robots, intended to navigate large sections of the hospital premises for various tasks, had to be parked. They struggled to cope with the unpredictability of human behavior, leading to potentially hazardous situations.

An autonomous role

Originally from Russia, Galimullin specializes in symbolic AI and blockchain technology.

So, what is it like to be a postdoctoral researcher at UiB?

«It's both exciting and challenging. I appreciate the freedom to cultivate my research interests without the pressure of a looming dissertation deadline,» says Galimullin. He continues: «As a postdoctoral researcher, I've expanded my professional network, explored new research directions, and gained valuable insights into academia.»

Communication skills are crucial

The mathematical models created by Galimullin are used to determine whether robots — or more precisely, their computer programs — can address the communication challenges they will encounter.

He emphasizes that we are largely discussing programs designed for hypothetical robots that have not yet been constructed.

«We can already develop robust models to assess whether robots can effectively communicate with each other to solve a given task. However, we observe that communication becomes significantly more challenging for robots when we consider social settings that also involve humans,» says Galimullin.

As an example, he highlights a scenario in which a robot detects an impending dangerous situation that the human with whom the robot is interacting is unaware of.

«Instilling in the robot an understanding that humans may not share the same level of information can be quite tricky. There are multiple layers of situational understanding involved, and it is challenging to convey to the robot that the human may not have observed the same information and, as a result, must be warned about the imminent danger,» Galimullin explains.
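The scenario above — the robot knows about a danger the human does not — is the kind of situation that possible-worlds models of knowledge are built to capture. The following sketch is a minimal assumption-laden illustration (the two worlds, the agents, and the `danger` fact are all invented for this example), not the models described in the article.

```python
# Minimal possible-worlds sketch (illustrative assumptions, not the
# article's models) of "the robot knows about the danger, the human does not".

# Two possible worlds: in w1 the danger is real, in w2 it is not.
worlds = {"w1": {"danger"}, "w2": set()}

# Each agent's indistinguishability relation: pairs of worlds the agent
# cannot tell apart. The robot's sensors distinguish w1 from w2;
# the human's observations do not.
indist = {
    "robot": {("w1", "w1"), ("w2", "w2")},
    "human": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")},
}

def knows(agent, fact, world):
    """An agent knows `fact` at `world` iff the fact holds in every world
    the agent cannot distinguish from `world`."""
    return all(fact in worlds[v] for (u, v) in indist[agent] if u == world)

# In the actual world w1, where the danger is present:
print(knows("robot", "danger", "w1"))  # True  - the robot knows
print(knows("human", "danger", "w1"))  # False - the human does not

# The robot should warn exactly when this asymmetry of knowledge holds.
should_warn = knows("robot", "danger", "w1") and not knows("human", "danger", "w1")
print(should_warn)  # True
```

The "multiple layers of situational understanding" the quote mentions show up here as nested queries of the same kind — for instance, asking whether the robot knows that the human does not know — each layer adding another pass over the agents' indistinguishability relations.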

The source of this news is from University of Bergen
