Displacement or Complement? HKUST Researchers Reveal Mixed-bag Responses in Human Interaction Study with AI | The Hong Kong University of Science and Technology

September 01, 2023


Artificial intelligence (AI) is all the rage in the public eye. Despite its rapid development, however, how AI can be incorporated into everyday life to our advantage remains an elusive question that deserves the attention of many scientists. While in theory AI can replace, or even displace, human beings from their positions, the challenge remains how different industries and institutions can take advantage of this technological advancement without drowning in it.

Recently, a team of researchers at the Hong Kong University of Science and Technology (HKUST) conducted an ambitious study of AI applications on the education front, examining how AI could enhance grading while observing human participants' behavior in the presence of a computerized companion. They found that teachers were generally receptive to AI's input - until the two sides clashed over who should have the final say. This very much resembles how human beings interact with one another when a newcomer forays into existing territory.

The research was conducted by HKUST Department of Computer Science and Engineering Ph.D. candidate Chengbo ZHENG and four of his teammates under the supervision of Associate Professor Xiaojuan MA. They developed an AI group member named AESER (Automated Essay ScorER) and divided twenty English teachers into ten groups to investigate AESER's impact in a group discussion setting, where the AI would contribute to opinion deliberation, ask and answer questions, and even vote on the final decision. In this study, designed akin to the controlled "Wizard of Oz" research method, a deep learning model and a human researcher together formed AESER's input; AESER would then exchange views and conduct discussions with the other participants in an online meeting room.

While the team expected AESER to promote objectivity and provide novel perspectives that would otherwise be overlooked, potential challenges were soon revealed. First, there was the risk of conformity, where the AI's engagement could quickly create a majority that thwarted discussion. Second, AESER's views were found to be rigid, even stubborn, which frustrated participants once they realized an argument could never be "won". Many also did not think the AI's input should be given equal weight, seeing it as better suited to the role of an assistant to actual human work.

"At this stage, AI is deemed somewhat 'stubborn' by human collaborators, for good and bad,” noted Prof. Ma. “On the one hand, AI is stubborn so it does not fear to express its opinions frankly and openly. However, human collaborators feel disengaged when they could not meaningfully persuade AI to change its view. Humans varying attitudes towards AI. Some consider it to be a single intelligent entity while others regard AI as the voice of collective intelligence that emerges from big data. Discussions about issues such as authority and bias thus arise.” 

The immediate next step for the team involves expanding the study's scope to gather more quantitative data, which will provide more measurable and precise insights into how AI impacts group decision-making. They are also looking to incorporate large language models (LLMs) such as ChatGPT into the study, which could bring new insights and perspectives to group discussions.

Their study was published at the ACM Conference on Human Factors in Computing Systems in April 2023.

Source: The Hong Kong University of Science and Technology
