Large Language Models pose risk to science with false answers, says Oxford study

November 21, 2023

The data the technology uses to answer questions does not always come from a factually correct source; it can contain false statements, opinions, and creative writing.

Professor Mittelstadt explains, ‘People using LLMs often anthropomorphise the technology, where they trust it as a human-like information source. This makes the user vulnerable both to regurgitated false information that was present in the training data, and to “hallucinations” - false information spontaneously generated by the LLM that was not present in the training data.’

Instead, the authors recommend using LLMs to transform information the user has already supplied, for example rewriting bullet points as a conclusion or generating code to transform scientific data into a graph.

Co-authors of the study are Professor Sandra Wachter, Oxford Internet Institute and Dieter Schwarz Associate Professor of AI, Government & Policy, and Research Associate Chris Russell, Oxford Internet Institute.

Source: University of Oxford.