Mistral at SemEval-2024 Task 5: Mistral 7B for argument reasoning in Civil Procedure
Siino, Marco
2024-01-01
Abstract
At SemEval-2024 Task 5, the organizers introduced a novel natural language processing challenge and corpus in the domain of United States civil procedure. Each sample in the corpus comprises a comprehensive overview of a legal case, a specific question associated with it, and a candidate argument in support of a solution, supplemented with an in-depth rationale explaining the applicability of the argument in the given context. Derived from a text designed for legal education, this dataset presents a challenging benchmark for contemporary legal language models. This paper describes the approach we adopted for our participation in the competition. Specifically, we detail the use of a Mistral 7B model to answer the questions provided. Our single submission reaches an F1-score of 0.5597 and an accuracy of 0.5714, outperforming the task baseline.
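
To give a concrete picture of the setup the abstract describes, the sketch below shows one way to pose a Task 5 instance to Mistral 7B as a zero-shot binary judgment using the Hugging Face transformers library. The checkpoint name, prompt wording, and decoding settings are illustrative assumptions, not the configuration reported in the paper.

```python
# A minimal zero-shot sketch: pose a Task 5 instance to Mistral 7B and read
# off a binary judgment. The checkpoint, prompt wording, and decoding
# settings are illustrative assumptions, not the paper's configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)

def judge_answer(introduction: str, question: str, answer: str) -> str:
    """Classify a candidate answer as 'correct' or 'incorrect'."""
    prompt = (
        "You are an expert in United States civil procedure.\n"
        f"Case introduction: {introduction}\n"
        f"Question: {question}\n"
        f"Candidate answer: {answer}\n"
        "Is the candidate answer correct? Reply with exactly one word: "
        "correct or incorrect."
    )
    # Apply the model's chat template and generate a short, greedy reply.
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}], return_tensors="pt"
    ).to(model.device)
    output = model.generate(
        input_ids,
        max_new_tokens=5,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )
    reply = tokenizer.decode(
        output[0, input_ids.shape[-1]:], skip_special_tokens=True
    )
    # 'incorrect' contains 'correct' as a substring, so test for it first.
    return "incorrect" if "incorrect" in reply.lower() else "correct"
```

A prediction for one dataset row would then be obtained with judge_answer(row["introduction"], row["question"], row["answer"]); these field names are hypothetical and depend on how the task data are loaded.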


