Profiling Cryptocurrency Influencers with Few-Shot Learning Using Data Augmentation and ELECTRA

Siino M. (first author)
2023-01-01

Abstract

With this work we propose an application of the ELECTRA Transformer, fine-tuned on two augmented versions of the same training dataset. Our team developed this novel framework to take part in the Profiling Cryptocurrency Influencers with Few-shot Learning task hosted at PAN@CLEF2023. Our strategy consists of an early data augmentation stage followed by fine-tuning of ELECTRA. In the first stage, we augment the original training dataset provided by the organizers using backtranslation. Using this augmented version of the training dataset, we fine-tune ELECTRA. Finally, using the fine-tuned model, we infer the labels of the samples provided in the test set. To develop and test our model we used a two-way validation on the training set: first we evaluate all the metrics on the augmented training set, and then on the original training set. The metrics we considered include accuracy, Macro F1, Micro F1, Recall, and Precision. According to the official evaluator, our best submission reached a Macro F1 of 0.3762.
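As a rough illustration of the two-stage pipeline described in the abstract, the sketch below shows how backtranslation augmentation and ELECTRA fine-tuning could be wired together with the Hugging Face libraries. This is not the authors' code: the model checkpoints, the pivot language (German), the number of classes, the hyperparameters, and the train.csv column names ("text", "label") are all assumptions made for illustration.

# Minimal sketch of backtranslation augmentation + ELECTRA fine-tuning (assumed setup).
from datasets import Dataset, load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments, pipeline)

# Stage 1: backtranslation (EN -> DE -> EN) to augment the few-shot training set.
to_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

def backtranslate(text):
    # Translate to the pivot language and back to obtain a paraphrased copy.
    pivot = to_de(text, max_length=512)[0]["translation_text"]
    return to_en(pivot, max_length=512)[0]["translation_text"]

# Hypothetical file layout: train.csv with a "text" column and an integer "label" column.
original = load_dataset("csv", data_files="train.csv")["train"]
original_rows = [{"text": r["text"], "label": r["label"]} for r in original]
augmented_rows = [{"text": backtranslate(r["text"]), "label": r["label"]} for r in original]
full_train = Dataset.from_list(original_rows + augmented_rows)

# Stage 2: fine-tune ELECTRA on the augmented training set.
checkpoint = "google/electra-base-discriminator"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=5)  # class count assumed

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = full_train.map(tokenize, batched=True)

args = TrainingArguments(output_dir="electra-influencers",
                         num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=tokenized).train()

The fine-tuned model saved under output_dir would then be used to infer labels on the test set, and the same evaluation metrics (accuracy, Macro/Micro F1, Recall, Precision) could be computed on both the augmented and the original training set, mirroring the two-way validation described above.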
2023
author profiling
cryptocurrency influencers
data augmentation
electra
few-shot learning
text classification
Twitter

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11769/607933
Citations
  • Scopus 10