Alejandro Mosquera López is an online safety expert and Kaggle Grandmaster working in cybersecurity. His main research interests are Trustworthy AI and NLP. ORCID iD: https://orcid.org/0000-0002-6020-3569

Saturday, February 17, 2024

Detecting LLM hallucinations and overgeneration mistakes @ SemEval 2024

The modern NLG landscape is plagued by two interlinked problems: on the one hand, our current neural models have a propensity to produce inaccurate but fluent outputs; on the other hand, our metrics are better at describing fluency than correctness. This leads neural networks to "hallucinate", i.e., produce fluent but incorrect outputs that we currently struggle to detect automatically. For many NLG applications, however, the correctness of an output is mission-critical. For instance, producing a plausible-sounding translation that is inconsistent with the source text jeopardizes the usefulness of a machine translation pipeline. For this reason, SHROOM, the Shared-task on Hallucinations and Related Observable Overgeneration Mistakes, aims to foster the community's growing interest in this topic.

In this competition, participants were asked to perform binary classification to identify cases of fluent overgeneration hallucinations in two different setups: a model-aware and a model-agnostic track. In order to do this, they had to detect grammatically sound outputs that contain incorrect or unsupported semantic information inconsistent with the source input, with (model-aware) or without (model-agnostic) access to the model that produced the output.

The approach evaluated here, a simple linear combination of reference-model scores, ranked 3rd in the model-agnostic track with an accuracy of 0.826.

Related work


Hallucination in AI means that the model makes up content that sounds real but is either wrong or unrelated to the context. This often happens because the model has built-in biases, lacks grounding in the real world, or was trained on incomplete data. In these instances, the model produces information it was never explicitly taught, leading to responses that can be incorrect or misleading.

The following link (https://www.rungalileo.io/blog/deep-dive-into-llm-hallucinations-across-generative-tasks) provides a good analysis of hallucination types, which are reproduced below:

"Intrinsic Hallucinations:" These are made-up details that directly conflict with the original information. For example, if the original content says "The first Ebola vaccine was approved by the FDA in 2019," but the summarized version says "The first Ebola vaccine was approved in 2021," then that's an intrinsic hallucination.
"Extrinsic Hallucinations:" These are details added to the summarized version that can't be confirmed or denied by the original content. For instance, if the summary includes "China has already started clinical trials of the COVID-19 vaccine," but the original content doesn't mention this, it's an extrinsic hallucination. Even though it may be true and add useful context, it's seen as risky as it's not verifiable from the original information.


Source of Image 1: Survey of Hallucination in Natural Language Generation

Approach


Since I only took part in the model-agnostic track, I had no access to, nor knowledge of, the source models used for generation. For this reason, the following models were considered for feature generation:

  • COMET: Developed by Rei et al., COMET is a neural quality estimation metric that has been validated as a state-of-the-art reference-based method [Kocmi et al.].
  • Vectara HHEM: an open-source model created by Vectara for detecting hallucinations in LLM outputs. It is particularly useful when building retrieval-augmented generation (RAG) applications, where a set of facts is summarized by an LLM, but the model can also be used in other contexts.
  • LaBSE: a metric that evaluates the cosine similarity of the source and translation sentence embeddings [Feng et al.]. It is a dual-encoder approach that relies on pretrained transformers and is fine-tuned for translation ranking with an additive margin softmax loss. Two different features were extracted from this approach depending on the variables compared via cosine similarity (a minimal extraction sketch is shown after this list):
    • labse1: hypothesis vs. target
    • labse2: hypothesis vs. source
  • SelfCheckGPT QA (MQAG) [Manakul et al.]: facilitates consistency assessment by creating multiple-choice questions that a separate answering system can answer for each passage. If the passages convey the same information, the answering system is expected to predict the same answers. The MQAG framework consists of three main components: a question-answer generation system (G1), a distractor generation system (G2), and an answering system (A). Two different features were extracted via this approach, one using GPT-3.5 Turbo and another using GPT-4.
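As an illustration, the snippet below sketches how the two LaBSE similarity features could be computed with the sentence-transformers library. This is a minimal sketch rather than the exact competition code: the record field names (src, tgt, hyp) and the helper name labse_features are assumptions for illustration only.

```python
# Minimal sketch (not the exact competition code): computing the two LaBSE
# cosine-similarity features with the sentence-transformers library.
# The field names "src", "tgt", "hyp" are assumed for illustration.
from sentence_transformers import SentenceTransformer, util

labse = SentenceTransformer("sentence-transformers/LaBSE")

def labse_features(record):
    """Return (labse1, labse2) = cosine similarity hyp-vs-tgt and hyp-vs-src."""
    hyp_emb = labse.encode(record["hyp"], convert_to_tensor=True)
    tgt_emb = labse.encode(record["tgt"], convert_to_tensor=True)
    src_emb = labse.encode(record["src"], convert_to_tensor=True)
    labse1 = util.cos_sim(hyp_emb, tgt_emb).item()  # hypothesis vs. target
    labse2 = util.cos_sim(hyp_emb, src_emb).item()  # hypothesis vs. source
    return labse1, labse2

example = {
    "src": "Le chat dort sur le canapé.",
    "tgt": "The cat is sleeping on the sofa.",
    "hyp": "The dog is barking in the garden.",
}
print(labse_features(example))  # low similarities hint at an unsupported output
```

Analogous scores from COMET, Vectara HHEM and SelfCheckGPT would be computed with their respective libraries and appended as additional feature columns.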

Finally, a logistic regression model was trained on the main competition dataset using the features described above. The feature weights are quite insightful, as shown below: the Vectara and GPT-4 based features are the strongest overall.

Image 2: Logistic regression weights of the final model.
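For reference, the following is a minimal sketch of this final step, combining the per-example scores with scikit-learn's logistic regression. The feature names and the toy data are assumptions for illustration only; they stand in for the actual score columns produced by the models listed above.

```python
# Minimal sketch (assumed feature names and toy data, not the exact pipeline):
# fitting a logistic regression over the per-example model scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["comet", "vectara_hhem", "labse1", "labse2",
            "selfcheck_gpt35", "selfcheck_gpt4"]

# Toy stand-ins for the real training matrix: one row per labelled example,
# one column per feature score listed above.
rng = np.random.default_rng(0)
X_train = rng.random((200, len(FEATURES)))
y_train = (X_train.mean(axis=1) < 0.5).astype(int)  # 1 = "Hallucination"

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Inspect which scores drive the decision (cf. Image 2).
for name, weight in zip(FEATURES, clf.coef_[0]):
    print(f"{name:>16}: {weight:+.3f}")

# Binary prediction for a new example's feature vector.
x_new = rng.random((1, len(FEATURES)))
print(clf.predict(x_new))
```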


Results


Despite the simplicity of the approach (mostly relying on pre-trained resources and requiring no GPU infrastructure), it can be considered a strong baseline with an accuracy of 0.826. The ranking table from CodaLab is included below:

# | User | Entries | Date of Last Entry | Team Name | Accuracy
1 | liuwei | 14 | 01/30/24 | HIT_WL | 0.83067 (1)
2 | bradleypallen | 5 | 01/31/24 | — | 0.82933 (2)
3 | amsqr | 23 | 01/30/24 | Alejandro Mosquera | 0.82600 (3)
4 | ahoblitz | 28 | 01/31/24 | — | 0.82333 (4)
5 | zackchen | 10 | 01/31/24 | OPDAI | 0.82133 (5)
6 | vasko | 2 | 02/01/24 | DeepPavlov | 0.82067 (6)
7 | BruceW | 10 | 01/31/24 | — | 0.80000 (7)
8 | Piyush | 26 | 02/01/24 | AMEX_AI_Labs | 0.79933 (8)
9 | Nemo | 11 | 01/31/24 | — | 0.79933 (8)
10 | konstantinkobs | 1 | 01/21/24 | Pollice Verso | 0.79667 (9)
11 | yashkens | 3 | 02/01/24 | smurfcat | 0.79533 (10)
12 | lmeribal | 14 | 02/01/24 | smurfcat | 0.79533 (10)
13 | janpf | 15 | 01/25/24 | Pollice Verso | 0.79400 (11)
14 | mmazarbeik | 6 | 01/31/24 | — | 0.79267 (12)
15 | refaat173 | 11 | 01/31/24 | — | 0.79200 (13)
16 | ustc_xsong | 2 | 01/31/24 | — | 0.78533 (14)
17 | wutianqidx | 3 | 01/31/24 | — | 0.78067 (15)
18 | zhuming | 5 | 02/01/24 | — | 0.77333 (16)
19 | bond005 | 21 | 02/01/24 | SibNN | 0.77000 (17)
20 | ronghao | 1 | 01/22/24 | UMUTeam | 0.76933 (18)
21 | Nihed_B | 30 | 01/31/24 | — | 0.76133 (19)
22 | patanjali-b | 4 | 01/28/24 | — | 0.74867 (20)
23 | ioannaior | 13 | 01/30/24 | — | 0.74400 (21)
24 | daixiang | 1 | 01/27/24 | — | 0.73733 (22)
25 | gabor.recski | 4 | 01/31/24 | — | 0.73467 (23)
26 | Subin | 6 | 01/17/24 | — | 0.72800 (24)
27 | zahra_rahimi | 7 | 01/19/24 | HalluSafe | 0.72400 (25)
28 | Natalia_Grigoriadou | 17 | 01/29/24 | — | 0.70867 (26)
29 | natalia6547969 | 35 | 01/29/24 | — | 0.70867 (26)
30 | LexieWei | 1 | 01/12/24 | — | 0.68800 (27)
31 | novice_r8 | 6 | 01/23/24 | Halu-NLP | 0.68667 (28)
32 | PaulTrust | 2 | 01/20/24 | — | 0.68333 (29)
33 | AKA | 8 | 02/01/24 | CAISA | 0.67667 (30)
34 | PooyaFallah | 10 | 01/31/24 | SLPL SHROOM | 0.65800 (31)
35 | deema | 7 | 02/01/24 | — | 0.64600 (32)
36 | SergeyPetrakov | 35 | 02/01/24 | Skoltech | 0.63000 (33)
37 | byun | 2 | 01/23/24 | Byun | 0.61667 (34)
38 | SOHAN2004 | 7 | 01/31/24 | — | 0.61533 (35)
39 | Chinnag | 2 | 01/26/24 | ai_blues | 0.58733 (36)
40 | felix.roth | 8 | 02/01/24 | — | 0.57400 (37)
41 | yash9439 | 9 | 01/24/24 | NootNoot | 0.51467 (38)
42 | abhyudaysingh | 3 | 02/01/24 | — | 0.49800 (39)
43 | ptrust | 8 | 01/31/24 | — | 0.48933 (40)
44 | Yuan_Lu | 6 | 01/31/24 | — | 0.46067 (41)

References

chrF++: words helping character n-grams [Popovic]

COMET: A neural framework for MT evaluation [Rei et al.]

To Ship or Not to Ship: An Extensive Evaluation of Automatic Metrics for Machine Translation [Kocmi et al.]

Analyzing the Source and Target Contributions to Predictions in Neural Machine Translation [Voita et al.]

Bitext mining using distilled sentence representations for low-resource languages [Heffernan et al.]

Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond [Artetxe and Schwenk]

Unsupervised cross-lingual representation learning at scale [Conneau et al.]

XNLI: Evaluating cross-lingual sentence representations [Conneau et al.]

MQAG: Multiple-choice Question Answering and Generation for Assessing Information Consistency in Summarization [Manakul et al.]

Evaluating Factuality in Generation with Dependency-level Entailment [Goyal and Durrett]

QuestEval: Summarization Asks for Fact-based Evaluation [Scialom et al.]

SummEval: Re-evaluating Summarization Evaluation [Fabbri et al.]

SummaC: Re-visiting NLI-based models for inconsistency detection in summarization [Laban et al.]
