Technische Universität Berlin

11/16/2023 | Press release

What do you know about climate change, ChatGPT?

Since its launch in November 2022, ChatGPT has attracted more than 100 million users worldwide. AI language models such as ChatGPT, which are based on machine learning and large volumes of data, work with probability predictions. When prompted, they generate a response word by word: having learned word patterns from large quantities of training text, they use probability distributions to decide which word is most likely to come next in a sentence. That may sound straightforward at first, but in fact ChatGPT's logic is essentially a black box: not even the developers themselves can really say how the model arrives at a particular answer. Moreover, ChatGPT tends to guess rather than decline questions it cannot answer. Since public discourse and media coverage of climate change contain not only science-based information but also false information, scientists from TU Berlin and Berliner Hochschule für Technik wanted to find out how reliably ChatGPT answers climate-related questions.
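
To illustrate this word-by-word prediction, here is a minimal sketch in Python, assuming a toy, hand-written probability table in place of a real model's learned distributions (the words and numbers are purely illustrative):

    import random

    # Toy distribution over possible next words after the context below.
    # Purely illustrative values, not weights from any real language model.
    next_word_probs = {
        "change": 0.62,
        "policy": 0.23,
        "models": 0.15,
    }

    def sample_next_word(probs):
        """Draw the next word at random according to its probability."""
        words = list(probs)
        weights = list(probs.values())
        return random.choices(words, weights=weights, k=1)[0]

    context = "researchers study climate"
    print(context, sample_next_word(next_word_probs))
    # Most often prints: "researchers study climate change"

A real model computes such a distribution over its entire vocabulary at every step, conditioned on the full preceding text, which is what makes its internal reasoning so hard to inspect.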

Balanced and nuanced arguments

The researchers from the Green Consumption Assistant project, which is developing an AI-based assistant to help consumers shop more sustainably online, collected 95 questions about climate change and put them to ChatGPT. They then assessed the responses for accuracy, relevance, and consistency, checking them against publicly accessible, reliable sources of information on climate change such as the current report of the Intergovernmental Panel on Climate Change (IPCC).
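
As a minimal sketch of how such ratings might be recorded, assuming a hypothetical 1-to-3 scale and made-up scores (the press release does not state the scale the team actually used):

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class ResponseRating:
        """One ChatGPT answer, scored on the study's three criteria (hypothetical 1-3 scale)."""
        question: str
        accuracy: int
        relevance: int
        consistency: int

        def overall(self) -> float:
            """Average the three criterion scores into a single quality score."""
            return mean([self.accuracy, self.relevance, self.consistency])

    # Made-up example entry, not data from the actual study.
    rating = ResponseRating(
        question="How is marine life influenced by climate change?",
        accuracy=3,
        relevance=3,
        consistency=2,
    )
    print(rating.overall())  # ~2.67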

"We observed that ChatGPT provides balanced and nuanced arguments, and many responses end with a comment encouraging critical appraisal in order to avoid biased responses," says Dr. Maike Gossen from TU Berlin. For example, when asked "How is marine life influenced by climate change and how can negative influences be reduced?" ChatGPT mentioned in its response not only the reduction of greenhouse gases, but also reducing non-climatic effects of human activities like overfishing and pollution.

Overall, the quality of ChatGPT's responses to the climate-related questions was high. Among the inaccurately answered questions, the most common error was the so-called hallucination of facts: asserting facts that cannot be verified by any source, or even attributing invented assertions to invented sources. For instance, ChatGPT's response to the question "What percentage of recyclable waste is actually recycled by Germany?" was broadly correct but wrong in the details. In some cases, ChatGPT generated fabricated information such as made-up references or non-existent DOI and URL links. Other errors arose where ChatGPT cited genuine and relevant scientific sources or literature but drew false conclusions from them.

Responses reflect societal misunderstandings

The researchers were also able to confirm what is already known: inaccurate responses from ChatGPT often have a plausible-sounding tone and can therefore be falsely perceived as correct. "Since text generators like ChatGPT are trained to give responses that sound right to people, their self-assured style can lead people to believe that a response is correct," notes Dr. Maike Gossen. The team also encountered deep-rooted biases in large language models. Some of ChatGPT's false responses reflect societal misunderstandings about effective measures against climate change, such as overvaluing individual behavioral changes and low-impact single actions at the expense of higher-impact structural and collective changes. At times, the responses also seemed overly optimistic about technological solutions as the main way to contain climate change.

Nevertheless, the researchers say, language models have the potential to improve the way information on climate change is communicated. Their ability to process and analyze large amounts of data and to provide easy-to-understand answers to everyday questions can make them a valuable source of information on climate change. At the same time, however, there is a danger of language models spreading false information about climate change and promoting misinformation by reproducing outdated views or misconceptions of the issue. In conclusion, the brief study shows that checking the sources of environmental and climate information is more important than ever, as that is the only way to ensure the information is correct. Yet recognizing wrong answers often requires detailed specialist knowledge of the topic in question, especially since the responses appear plausible at first sight. The researchers also point out that, before any proposed use, the energy consumption of language models and the emissions associated with training them should be taken into account.

Contact

Dr. Maike Gossen
[email protected]
Department of Social-Ecological Transformation and Sustainable Digitalization