
Browsing by Author "Andrei-Cristian Rad"

Now showing 1 - 1 of 1

  • Article
    Prompting Fairness: Learning Prompts for Debiasing Large Language Models
    (Anonymous EACL submission, pre-print, 2023-06-01) Andrei-Victor Chisca; Andrei-Cristian Rad; Camelia Lemnaru
    Large language models are prone to internalize social biases due to the characteristics of the data used for their self-supervised training scheme. Considering their recent emergence and wide availability to the general public, it is mandatory to identify and alleviate these biases to avoid perpetuating stereotypes towards underrepresented groups. We present a novel prompt-tuning method for reducing biases in encoder models such as BERT or RoBERTa. Unlike other methods, we only train a small set of additional reusable token embeddings that can be concatenated to any input sequence to reduce bias in the outputs. We particularize this method to gender bias by providing a set of templates used for training the prompts. Evaluations on two benchmarks show that our method is on par with the state of the art while having a limited impact on language modeling ability.
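The mechanism described in the abstract — a small set of reusable, trainable prompt embeddings concatenated to any input sequence while the encoder itself stays frozen — can be sketched as below. This is a minimal illustration in PyTorch, not the authors' code: the class name, prompt count, and toy dimensions are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class PromptTuningWrapper(nn.Module):
    """Hypothetical sketch of prompt tuning: a few trainable prompt vectors
    are prepended to the (frozen) token embeddings of an encoder input."""

    def __init__(self, embedding: nn.Embedding, num_prompts: int = 8):
        super().__init__()
        d = embedding.embedding_dim
        self.embedding = embedding
        for p in self.embedding.parameters():
            p.requires_grad = False  # base model parameters stay frozen
        # The only trainable parameters: a small, reusable set of prompt embeddings.
        self.prompts = nn.Parameter(torch.randn(num_prompts, d) * 0.02)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        batch = input_ids.size(0)
        tok = self.embedding(input_ids)                        # (batch, seq, d)
        pfx = self.prompts.unsqueeze(0).expand(batch, -1, -1)  # (batch, num_prompts, d)
        # Concatenate the learned prompts in front of the token embeddings.
        return torch.cat([pfx, tok], dim=1)

emb = nn.Embedding(100, 16)                      # stand-in for a BERT-like embedding table
wrapper = PromptTuningWrapper(emb, num_prompts=4)
out = wrapper(torch.randint(0, 100, (2, 10)))
print(tuple(out.shape))                          # (2, 14, 16): 4 prompts + 10 tokens
trainable = [n for n, p in wrapper.named_parameters() if p.requires_grad]
print(trainable)                                 # ['prompts']
```

Because only the prompt vectors receive gradients, the same small set of embeddings can be reused across inputs, which matches the abstract's claim of a limited impact on the underlying model's language-modeling ability.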
